{"abstract": "We investigate the problem of delay constrained maximal information collection for CSMA-based wireless sensor networks. We study how to allocate the maximal allowable transmission delay at each node, such that the amount of information collected at the sink is maximized and the total delay for the data aggregation is within the given bound. We formulate the problem by using dynamic programming and propose an optimal algorithm for the optimal assignment of transmission attempts. Based on the analysis of the optimal solution, we propose a distributed greedy algorithm. It is shown to have a similar performance as the optimal one.", "keywords": "algorithms;design;performance;sensor networks;data aggregation;real-time traffic;csma/ca;delay constrained transmission", "title": "Real-Time Data Aggregation in Contention-Based Wireless Sensor Networks"} {"abstract": "This paper describes a method for detecting event trigger words in biomedical text based on a word sense disambiguation (WSD) approach. We first investigate the applicability of existing WSD techniques to trigger word disambiguation in the BioNLP 2009 shared task data, and find that we are able to outperform a traditional CRF-based approach for certain word-types. On the basis of this finding, we combine the WSD approach with the CRF, and obtain significant improvements over the standalone CRF, gaining particularly in recall.", "keywords": "biomedical text;machine learning;information extraction", "title": "word sense disambiguation for event trigger word detection"} {"abstract": "The lack of architecturally-significant mechanisms for aspectual composition might artificially hinder the specification of stable and reusable design aspects. Current aspect-oriented approaches at the architecture-level tend to mimic programming language join point models while overlooking mainstream architectural concepts such as styles and their semantics. Syntax-based pointcuts are typically used to select join points based on the names of architectural elements, exposing architecture descriptions to pointcut fragility and reusability problems. This paper presents style-based composition, a new flavor of aspect composition at the architectural level based on architectural styles. We propose style-based join point models and provide a pointcut language that supports the selection of join points based on style-constrained architectural models. Stability and reusability assessments of the proposed style-based composition model were carried out through three case studies involving different styles. The interplay of style-based pointcuts and some style composition techniques is also discussed.", "keywords": "architectural styles;architectural aspects;pointcut languages;style-based composition", "title": "composing architectural aspects based on style semantics"} {"abstract": "This paper describes our use of pen-based electronic classrooms to enhance several computer science courses. After presenting our motivation for undertaking this work, and its relevance to the growing interest in using tablet PC's in the classroom, we present an overview of our use of this technology to engage students during class. 
Finally, we present the students' reaction to the approach as measured through attitude surveys and a focus group.", "keywords": "computer science;present;groupware;use;technologies;pen;pen-based computing;motivation;survey;tablet;computation;tablet pcs;relevance;computer science curriculum;paper;focus-group;attitude;collaborative computing;class;student", "title": "using pen-based computers across the computer science curriculum"} {"abstract": "We show how to connect the syntactic and the functional correspondence for normalisers and abstract machines implementing hybrid (or layered) reduction strategies, that is, strategies that depend on subsidiary sub-strategies. Many fundamental strategies in the literature are hybrid, in particular many full-reducing strategies, and many full-reducing and complete strategies that deliver a fully reduced result when it exists. If we follow the standard program-transformation steps, the abstract machines obtained for hybrids after the syntactic correspondence cannot be refunctionalised, and the junction with the functional correspondence is severed. However, a solution is possible based on establishing the shape invariant of well-formed continuation stacks. We illustrate the problem and the solution with the derivation of substitution-based normalisers for normal order, a hybrid, full-reducing, and complete strategy of the pure lambda calculus. The machine we obtain is a substitution-based, eval/apply, open-terms version of Pierre Crégut's full-reducing Krivine machine KN.", "keywords": "operational semantics;program transformation;reduction strategies;abstract machines;full reduction", "title": "On the syntactic and functional correspondence between hybrid (or layered) normalisers and abstract machines"} {"abstract": "The purpose of this research is to present a case-based analytic method for a service-oriented value chain and a sustainable network design considering customer, environmental and social values. Enterprises can enhance competitive advantage by providing more value to all stakeholders in the network. Our model employs a stylized database to identify successful cases of value chain application under similar company marketing conditions, illustrating potential value chains and sustainable networks as references. This work first identifies economic benefits, environmental friendliness and social contribution values based on prior studies. Next, a search engine developed based on rough set theory searches and maps similarities to find similar or parallel cases in the database. Finally, a visualized network mapping is automatically generated to illustrate possible value chains. This study applies a case-based methodology to assist enterprises in developing a service-oriented value chain design. For decision makers, this can reduce survey time and inspire innovative work based on previous successful experience. Besides, successful ideas from prior cases can be reused. In addition to customer values, this methodology incorporates environmental and social values that may encourage a company to build its value chain in a more comprehensive and sustainable manner. This is a pilot study which attempts to utilize a computer-aided methodology to assist in service- or value-related design. The pertinent existing solutions can be filtered from an array of cases to engage the advantages from both product-oriented and service-oriented companies. Finally, a visualized display of the value network is formed to illustrate the results. 
A customized service-oriented value chain which incorporates environmental and social values can be designed according to different conditions. Also, this system engages the advantages from both product-oriented and service-oriented companies to build a more comprehensive value network. Apart from this, the system can be utilized as a benchmarking tool, and it could remind decision makers to consider potential value from a more multifaceted perspective. This is the first paper that applies a computer-aided method to design service-oriented value chains. This work can also serve as a decision support and benchmarking system because decision makers can develop different value networks according to various emphasized values. Finally, the visualized display of the value network can improve the communication among stakeholders.", "keywords": "value chain design;sustainable network design;case-based reasoning", "title": "A case-based method for service-oriented value chain and sustainable network design"} {"abstract": "We present a subdivision-based algorithm for multi-resolution hexahedral meshing. The input is a bounding rectilinear domain with a set of embedded 2-manifold boundaries of arbitrary genus and topology. The algorithm first constructs a simplified Voronoi structure to partition the object into individual components that can then be meshed separately. We create a coarse hexahedral mesh for each Voronoi cell, giving us an initial hexahedral scaffold. Recursive hexahedral subdivision of this hexahedral scaffold yields adaptive meshes. Splitting and smoothing the boundary cells makes the mesh conform to the input 2-manifolds. Our choice of smoothing rules makes the resulting boundary surface of the hexahedral mesh C2-continuous in the limit (C1 at extraordinary points), while also keeping a definite bound on the condition number of the Jacobian of the hexahedral mesh elements. By modifying the crease smoothing rules, we can also guarantee that the sharp features in the data are captured. Subdivision guarantees that we achieve a very good approximation for a given tolerance, with optimal mesh elements for each Level of Detail (LoD).", "keywords": "hexahedral meshing;mesh generation;subdivision meshes;3d meshing", "title": "volume subdivision based hexahedral finite element meshing of domains with interior 2-manifold boundaries"} {"abstract": "This paper provides a survey of two classes of methods that can be used in determining and improving the quality of individual files or groups of files. The first are edit/imputation methods for maintaining business rules and for imputing for missing data. The second are methods of data cleaning for finding duplicates within files or across files.", "keywords": "integer programming;set covering;data cleaning;approximate string comparison;unsupervised and supervised learning", "title": "Methods for evaluating and creating data quality"} {"abstract": "Purpose - The purpose of this paper is to solve generic magnetostatic problems by BEM, by studying how to use a boundary integral equation (BIE) with the double layer charge, derived from the scalar potential, as the unknown. Design/methodology/approach - Since the double layer charge produces only the potential gap without disturbing the normal magnetic flux density, the field is accurately formulated even by one BIE with one unknown. Once the double layer charge is determined, Biot-Savart's law easily gives the magnetic flux density. 
Findings - The BIE using the double layer charge is capable of robustly treating geometrical singularities at edges and corners. It is also capable of solving problems with extremely high magnetic permeability. Originality/value - The proposed BIE contains only the double layer charge, while the conventional equations derived from the scalar potential contain the single and double layer charges as unknowns. In multiply connected problems, the excitation potential in the material is derived from the magnetomotive force to represent the circulating fields due to multiply connected exciting currents.", "keywords": "boundary integral equation;double layer charge;multiply connected problem;nonlinear magnetostatic analysis;scalar potential;integral equations;electric current", "title": "Nonlinear magnetostatic BEM formulation using one unknown double layer charge"} {"abstract": "The uterine electrical activity is an efficient parameter for studying uterine contractility. In order to understand the ionic mechanisms responsible for its generation, we aimed at building a mathematical model of the uterine cell electrical activity based upon the physiological mechanisms. First, based on the voltage clamp experiments found in the literature, we focus on the principal ionic channels and their cognate currents involved in the generation of this electrical activity. Second, we provide the methodology of formulation of uterine ionic currents derived from a wide range of electrophysiological data. The model is validated step by step by comparing simulated voltage-clamp results with the experimental ones. The model successfully reproduces the generation of single spikes or trains of action potentials that fit with the experimental data. It allows analysis of the implications of the ionic channels. Likewise, the calcium-dependent conductance significantly influences the cellular oscillatory behavior.", "keywords": "myometrial ionic currents;uterine excitability;voltage clamp;action potential;electrophysiological model", "title": "Mathematical modeling of electrical activity of uterine muscle cells"} {"abstract": "This paper presents a new generic filtering algorithm which simultaneously considers n conjunctions of constraints as well as those constraints mentioning some variables Yk of the pairs (X, Yk) (1 ≤ k ≤ n) occurring in these conjunctions. The main benefit of this new technique comes from the fact that, for adjusting the bounds of a variable X according to n conjunctions, we do not perform n sweeps in an independent way but rather synchronize them. We then specialize this technique to the non-overlapping rectangles constraint, where we consider the case where several rectangles of height one have the same X coordinate for their origin as well as the same length. For this specific constraint we come up with an incremental bipartite matching algorithm which is triggered while we sweep over the time axis. We illustrate the usefulness of this new pruning method on a timetabling problem, where each task cannot be interrupted and requires the simultaneous availability of n distinct persons. In addition, each person has his own periods of unavailability and can only perform one task at a time.", "keywords": "global constraint;filtering algorithm;sweep;timetabling", "title": "Sweep synchronization as a global propagation mechanism"} {"abstract": "Reversibility is a key issue in the interface between computation and physics, and of growing importance as miniaturization progresses towards its physical limits. 
Most foundational work on reversible computing to date has focussed on simulations of low-level machine models. By contrast, we develop a more structural approach. We show how high-level functional programs can be mapped compositionally (i.e. in a syntax-directed fashion) into a simple kind of automata which are immediately seen to be reversible. The size of the automaton is linear in the size of the functional term. In mathematical terms, we are building a concrete model of functional computation. This construction stems directly from ideas arising in Geometry of Interaction and Linear Logic, but can be understood without any knowledge of these topics. In fact, it serves as an excellent introduction to them. At the same time, an interesting logical delineation between reversible and irreversible forms of computation emerges from our analysis. ", "keywords": "reversible computation;linear combinatory algebra;term-rewriting;automata;geometry of interaction", "title": "A structural approach to reversible computation"} {"abstract": "This paper introduces the evolution of Electronic Data Interchange (EDI) and the Universal Business Language (UBL), an OASIS standard to encode and customize business documents. It shows its peculiarities and also sets it into a broader picture showing where UBL is positioned in relation to business processes and standards like BPEL and BPMN.", "keywords": "universal business language;electronic data interchange;e-business", "title": "UBL: The DNA of next generation e-Business"} {"abstract": "The classical Internet has confronted many drawbacks in terms of network security, scalability, and performance, although it has strongly influenced the development and evolution of diverse network technologies, applications, and services. Therefore, new innovative research on the Future Internet has been performed to resolve the inherent weaknesses of the traditional Internet, which, in turn, requires new at-scale network testbeds and research infrastructure for large-scale experiments. In this context, K-GENI has been developed as an international programmable Future Internet testbed in the GENI spiral-2 program, and it has been operational between the USA (GENI) and Korea (KREONET) since 2010. The K-GENI testbed and the related collaborative efforts will be introduced with two major topics in this paper: (1) the design and deployment of the K-GENI testbed and (2) the federated meta operations between the K-GENI and GENI testbeds. Regarding the second topic in particular, we will describe how meta operations are federated across K-GENI between GMOC (GENI Meta Operations Center) and DvNOC (Distributed virtual Network Operations Center on KREONET/K-GENI), which is the first trial of an international experiment on the federated network operations over GENI.", "keywords": "geni;k-geni;kreonet;federation;dvnoc", "title": "K-GENI testbed deployment and federated meta operations experiment over GENI and KREONET"} {"abstract": "In this paper, the main measure of information theory, the amount of information, is analyzed and corrected. Three conceptions of the theory, at the microstate, dissipation-pathway, and self-organization levels, with a tight connection to statistical physics, are discussed. The concept of restricted information is introduced, and a proof of the uniqueness of the entropy function when the probabilities are rational numbers is presented. 
An artificial neural network (ANN) model for mapping the evaluation of transmitted information has been designed and experimentally validated in the biological area.", "keywords": "information theory;entropy;amount of information;artificial neural networks", "title": "Conceptions and modeling for transmitted information evaluation by ANN"} {"abstract": "Nowadays, organizations face very high competitiveness, and for this reason they have to continuously improve their processes. Two key aspects to be considered in the management of software processes in order to promote their improvement are their effective modeling and evaluation. The integrated management of these key aspects is not a trivial task: the huge number and diversity of elements to take into account makes the management of software processes complex. To ease and effectively support this management, in this paper we propose FMESP: a framework for the integrated management of the modeling and measurement of software processes. FMESP incorporates the conceptual and technological elements necessary to ease the integrated management of the definition and evaluation of software processes. From the measurement perspective of the framework, and in order to support software process measurement at the model level, a set of representative measures has been defined and validated.", "keywords": "software process modeling;software measurement;conceptual framework;software engineering environment", "title": "FMESP: Framework for the modeling and evaluation of software processes"} {"abstract": "In this paper, firstly, the control problem for the chaos synchronization of discrete-time chaotic (hyperchaotic) systems with unknown parameters is considered. Next, a back-stepping control law is derived to make the error signals between the drive 2D discrete-time chaotic system and the response 2D discrete-time chaotic system with two uncertain parameters asymptotically synchronized. Finally, the approach is extended to the synchronization problem for a 3D discrete-time chaotic system with two unknown parameters. Numerical simulations are presented to show the effectiveness of the proposed chaos synchronization scheme.", "keywords": "anticipated function synchronization;backstepping design;fold maps;henon maps", "title": "ANTICIPATED FUNCTION SYNCHRONIZATION WITH UNKNOWN PARAMETERS OF DISCRETE-TIME CHAOTIC SYSTEMS"} {"abstract": "The most important feature of scoliosis is the lateral curvature of the spine. It can be treated either conservatively or by surgery; however, treatment choice depends mainly on curve progression, which is determined by frequent curve assessment. This is a review of methods of curve measurement and a proof of the relationship between them.", "keywords": "scoliosis;curve progression;curve measurement", "title": "Methods of assessing spinal radiographs in scoliosis are functions of its geometry"} {"abstract": "An applicable method is developed for the identification and feedback control of natural convection. The Boussinesq equation is reduced to a small set of ordinary differential equations by means of the Karhunen-Loève Galerkin procedure [Int. J. Heat Mass Transfer 39 (1996) 3311]. Based on this low-dimensional dynamic model, a feedback control synthesis is constructed by first performing an extended Kalman filter estimate of the velocity and temperature fields to treat the measurement errors and then developing the optimal feedback law by means of linear quadratic regulator theory. 
The present method allows for the practical implementation of modern control concepts in many flow systems, including natural convection.", "keywords": "karhunen-loève galerkin procedure;feedback control;natural convection", "title": "Feedback control of natural convection"} {"abstract": "The demand assigned capacity management (DACM) problem in IP over optical (IPO) networks aims at devising efficient bandwidth replenishment schedules from the optical domain conditioned upon traffic evolution processes in the IP domain. A replenishment schedule specifies the location, sizing, and sequencing of link capacity expansions to support the growth of Internet traffic demand in the IP network subject to economic considerations. A major distinction in the approach presented in this paper is the focus of attention on the economics of "excess bandwidth" in the IP domain, which can be viewed as an inventory system that is endowed with fixed and variable costs and depletes with increase in IP traffic demand, requiring replenishment from the optical domain. We develop mathematical models to address the DACM problem in IPO networks based on a class of inventory management replenishment methods. We apply the technique to IPO networks that implement capacity adaptive routing in the IP domain and networks without capacity adaptive routing. We analyze the performance characteristics under both scenarios, in terms of minimizing cumulative replenishment cost over an interval of time. For the non-capacity adaptive routing scenario, we consider a shortest path approach in the IP domain, specifically OSPF. For the capacity adaptive scenario, we use an online constraint-based routing scheme. This study represents an application of integrated traffic engineering, which concerns collaborative decision making targeted towards network performance improvement that takes into consideration traffic demands, control capabilities, and network assets at different levels in the network hierarchy.", "keywords": "ason;bandwidth replenishment;capacity management;demand assigned capacity management;gmpls;integrated traffic engineering;inventory management;ip over optical networks;ipo;mpls;network performance optimization;traffic engineering", "title": "Demand assigned capacity management (DACM) in IP over optical (IPO) networks"} {"abstract": "We propose a new parallel algorithm to find all modules of a large fault tree. An experiment is used to compare the linear time algorithm and the parallel algorithm. The result shows that our method is efficient in handling large-scale fault trees.", "keywords": "modularization;parallel algorithm;fault tree;directed acyclic graph", "title": "Parallel algorithm for finding modules of large-scale coherent fault trees"} {"abstract": "A method based on Particle Swarm Optimization (PSO) is proposed and described for finding subspaces that carry meaningful information about the presence of groups in high-dimensional data sets. The advantage of using PSO is that not only the variables that are responsible for the main data structure are identified but also other subspaces corresponding to local optima. The characteristics of the method are shown on two simulated data sets and on a real matrix coming from the analysis of genomic microarrays. In all cases, PSO made it possible to explore different subspaces and to discover meaningful structures in the analyzed data. 
", "keywords": "variable selection;clustering;particle swarm optimization ;swarm intelligence", "title": "Finding relevant clustering directions in high-dimensional data using Particle Swarm Optimization"} {"abstract": "Although there are many neural network (NN) algorithms for prediction and for control, and although methods for optimal estimation (including filtering and prediction) and for optimal control in linear systems were provided by Kalman in 1960 (with nonlinear extensions since then), there has been, to my knowledge, no NN algorithm that learns either Kalman prediction or Kalman control (apart from the special case of stationary control). Here we show how optimal Kalman prediction and control (KPC), as well as system identification, can be learned and executed by a recurrent neural network composed of linear-response nodes, using as input only a stream of noisy measurement data. The requirements of KPC appear to impose significant constraints on the allowed NN circuitry and signal flows. The NN architecture implied by these constraints bears certain resemblances to the local-circuit architecture of mammalian cerebral cortex. We discuss these resemblances, as well as caveats that limit our current ability to draw inferences for biological function. It has been suggested that the local cortical circuit (LCC) architecture may perform core functions (as yet unknown) that underlie sensory, motor, and other cortical processing. It is reasonable to conjecture that such functions may include prediction, the estimation or inference of missing or noisy sensory data, and the goal-driven generation of control signals. The resemblances found between the KPC NN architecture and that of the LCC are consistent with this conjecture.", "keywords": "kalman filter;kalman control;recurrent neural network;local cortical circuit", "title": "Neural network learning of optimal Kalman prediction and control"} {"abstract": "Expansion of the DMP approach for gravitation compensation in elastic robots. Grid-based mixture approach based on bilinear interpolation of learned trajectories. Model-free gravitation compensation in directed limb movements.", "keywords": "passive compliance;compliant robotics;movement primitives;reinforcement learning;robot arm;directed limb movement", "title": "Learning point-to-point movements on an elastic limb using dynamic movement primitives"} {"abstract": "We propose a new multipurpose audio watermarking scheme in which two watermarks are used. For intellectual property protection, audio clip is divided into frames and robust watermark is embedded. At the same time, the feature of each frame is extracted, and it is quantized as semi-fragile watermark. Then, the frame is cut into sections and the semi-fragile watermark bits are embedded into these sections. For content authentication, the semi-fragile watermark extracted from each frame is compared with the watermark generated from the same frame to judge whether the watermarked audio is tampered, and locate the tampered position. Experimental results show that our scheme is inaudibility. 
Both watermark schemes are robust to common signal processing operations such as additive noise, resampling, re-quantization and low-pass filtering, and the semi-fragile watermark scheme can detect and locate tampering.", "keywords": "multipurpose audio watermarking;robust watermark;copyright protection;semi-fragile watermark;content authentication", "title": "Audio dual watermarking scheme for copyright protection and content authentication"} {"abstract": "To date, a large number of algorithms to solve the problem of autonomous exploration and mapping have been presented. However, few efforts have been made to compare these techniques. In this paper, an extensive study of the most important methods for autonomous exploration and mapping of unknown environments is presented. Furthermore, a representative subset of these techniques has been chosen to be analysed. This subset contains methods that differ in the level of multi-robot coordination and in the grade of integration with the simultaneous localization and mapping (SLAM) algorithm. These exploration techniques were tested in simulation and compared using different criteria such as exploration time or map quality. The results of this analysis are shown in this paper. The weaknesses and strengths of each strategy have been stated and the most appropriate algorithm for each application has been determined.", "keywords": "autonomous exploration;mapping of unknown environments;path planning for multiple mobile robot systems", "title": "A comparison of path planning strategies for autonomous exploration and mapping of unknown environments"} {"abstract": "We propose and evaluate novel reliable multicast protocols that combine active repair service (a.k.a. local recovery) and parity encoding (a.k.a. forward error correction or FEC) techniques. We show that, compared to other repair service protocols, our protocols require less buffer inside the network, maintain the low bandwidth requirements of previously proposed repair service/FEC combination protocols, and reduce the amount of FEC processing at repair servers, moving more of this processing to the end-hosts. We also examine repair service/FEC combination protocols in an environment where loss rates differ across domains within the network. We find that repair services are more effective than FEC at reducing bandwidth utilization in such environments. Furthermore, we show that adding FEC to a repair services protocol not only reduces buffer requirements at repair servers, but also reduces bandwidth utilization in domains with high loss, or in domains with large populations of receivers.", "keywords": "reliable multicast;forward error correction;repair services;active services;performance analysis", "title": "Improving reliable multicast using active parity encoding services"} {"abstract": "We evaluated eight different concept drift detectors. A 2^k factorial design was used to indicate the best parameters for each method. Tests compared accuracy, evaluation time, false alarm and miss detection rates. A Mahalanobis distance is proposed as a metric to compare drift methods. DDM was the method that presented the best average results in all tested datasets.", "keywords": "data streams;time-changing data;concept drift detectors;comparison", "title": "A comparative study on concept drift detectors"} {"abstract": "This paper presents a fully automatic method for creating a 3D model from a single photograph. 
The model is made up of several texture-mapped planar billboards and has the complexity of a typical children's pop-up book illustration. Our main insight is that instead of attempting to recover precise geometry, we statistically model geometric classes defined by their orientations in the scene. Our algorithm labels regions of the input image into coarse categories: "ground", "sky", and "vertical". These labels are then used to "cut and fold" the image into a pop-up model using a set of simple assumptions. Because of the inherent ambiguity of the problem and the statistical nature of the approach, the algorithm is not expected to work on every image. However, it performs surprisingly well for a wide range of scenes taken from a typical person's photo album.", "keywords": "single-view reconstruction;image-based rendering;machine learning;image segmentation", "title": "automatic photo pop-up"} {"abstract": "This article designs H∞ and H2 stabilisers, respectively, for linear time-invariant systems via static output feedback (SOF). A state coordinate transformation of the controlled system generates a dummy system with lower dimension, which cannot be directly influenced by the SOF stabiliser. Then the H∞ (H2) stabiliser via SOF may be obtained by solving a proper linear matrix inequality (LMI). This LMI is feasible only if the dummy system has a state feedback stabiliser with the same H∞ (H2) index. Meanwhile, a free matrix variable in the coordinate transformation can act as the state feedback gain matrix. Hence, after the design of the dummy system, the SOF stabiliser can be determined if a certain LMI is feasible. This method does not involve any conservative reduction or enlargement of matrix inequalities. Numerical examples show the validity of the proposed algorithms.", "keywords": "h2;static output feedback;coordinates transformation;lmi;optimal control;h∞ control", "title": "H∞ and H2 stabilisers via static output feedback based on coordinate transformations with free variables"} {"abstract": "A high-order sliding-mode observer is designed for linear time invariant systems with single output and unknown bounded single input. It provides for the global observation of the state and the output under sufficient and necessary conditions of strong observability or strong detectability. The observation is finite-time-convergent and exact in the strong observability case. The accuracy of the proposed observation and identification schemes is estimated via the sampling step or magnitude of deterministic noises. The results are extended to the multi-input multi-output case.", "keywords": "high order sliding modes;observation;identification", "title": "Observation of linear systems with unknown inputs via high-order sliding-modes"} {"abstract": "A noninvasive diagnostic device was developed to assess the vascular origin and severity of penile dysfunction. It was designed and studied using both a mathematical model of penile hemodynamics and preliminary experiments on healthy young volunteers. The device is based on the application of an external pressure (or vacuum) perturbation to the penis following the induction of erection. The rate of volume change while the penis returns to its natural condition is measured using a noninvasive system that includes a volume measurement mechanism that has very low friction, thereby not affecting the measured system. The rate of volume change (net flow) is obtained and analyzed. 
Simulations using a mathematical model show that the device is capable of differentiating between arterial insufficiency and venous leak, and of indicating the severity of each. In preliminary measurements on young healthy volunteers, the feasibility of the measurement has been demonstrated. More studies are required to confirm the diagnostic value of the measurements.", "keywords": "erectile dysfunction;arterial insufficiency;venous leak;veno-occlusive mechanism;mathematical model;hemodynamics", "title": "A Mathematical Model of Penile Vascular Dysfunction and Its Application to a New Diagnostic Technique"} {"abstract": "Motivated by some spectral results in the characterization of concept lattices, we investigate the spectra of reducible matrices over complete idempotent semifields in the framework of naturally-ordered semirings, or dioids. We find non-null eigenvectors for every non-null element in the semifield and conclude that the notion of spectrum has to be refined to encompass that of the incomplete semifield case so as to include only those eigenvalues with eigenvectors that have finite coordinates. Considering special sets of eigenvectors brings finite complete lattices into the picture, and we contend that such structure may be more important than standard eigenspaces for matrices over completed idempotent semifields.", "keywords": "matrix spectra;dioids;complete idempotent semifields;complete idempotent semimodules;spectral order lattices", "title": "The spectra of irreducible matrices over completed idempotent semifields"} {"abstract": "Health and how to support it with interactive computer systems, networks, and devices is a global and, for many countries, an explicit national priority. Significant interest in issues related to interactive systems for health has been demonstrated repeatedly within SIGCHI. A community focused on health started in 2010, fostering collaboration and dissemination of research findings as well as bridging with practitioners. As part of this community's on-going efforts, we will hold a special interest group session during ACM CHI 2011 to discuss, prioritize, and promote some of these most pressing issues facing the community.", "keywords": "fitness;assistive technologies;medicine;telecare;wellness;health;nutrition;health informatics", "title": "interactive technologies for health special interest group"} {"abstract": "The problem of cyclic sequence alignment is considered. Most existing optimal methods for comparing cyclic sequences are very time consuming. For applications where these alignments are intensively used, optimal methods are seldom a feasible choice. The alternative to an exact and costly solution is to use a close-to-optimal but cheaper approach. In previous works, we have presented three suboptimal techniques inspired by the quadratic-time suboptimal algorithm proposed by Bunke and Buhler. Do these approximate approaches come sufficiently close to the optimal solution, with a considerable reduction in computing time? Is it thus worthwhile investigating these approximate methods? This paper shows that approximate techniques are good alternatives to optimal methods.", "keywords": "cyclic sequences;cyclic string matching;structural pattern analysis", "title": "Cyclic sequence alignments: Approximate versus optimal techniques"} {"abstract": "In this paper we investigate the chip bonding technology of GaAs/AlGaAs quantum cascade lasers (QCLs). 
Its results have a strong influence on the final performance of devices and are essential for achieving room temperature operation. Various solders were investigated and compared in terms of their thermal resistance and induced stress. The spatially resolved photoluminescence technique has been applied for device thermal analysis. The soldering quality was also investigated by means of scanning acoustic microscopy. Particular attention has been paid to Au-Au die bonding, which seems to be a promising alternative to the choice between hard and soft solder bonding of GaAs/AlGaAs QCLs operating from cryogenic temperatures up to room temperature. A good quality direct Au-Au bonding was achieved for bonding parameters comparable with the ones typical for the AuSn eutectic bonding process. High performance room temperature operation of GaAs/AlGaAs QCLs has been achieved with state-of-the-art parameters.", "keywords": "gaas/algaas quantum cascade laser;mounting technology;die-bonding;packaging;scanning acoustic microscopy", "title": "Direct Au-Au bonding technology for high performance GaAs/AlGaAs quantum cascade lasers"} {"abstract": "The InterGrid system aims to provide an execution environment for running applications on top of interconnected infrastructures. The system uses virtual machines as building blocks to construct execution environments that span multiple computing sites. Such environments can be extended to operate on cloud infrastructures, such as Amazon EC2. This article provides an abstract view of the proposed architecture and its implementation; experiments show the scalability of an InterGrid-managed infrastructure and how the system can benefit from using the cloud.", "keywords": "amazon ec2;cloud computing;grid computing;distributed systems;scheduling;resource management;virtualization;intergrid gateway", "title": "Harnessing Cloud Technologies for a Virtualized Distributed Computing Infrastructure"} {"abstract": "The input of digital TV software is a transport stream (TS) in the MPEG-2 (Moving Picture Experts Group) format, a standard specification for moving picture compression. We propose a method to thoroughly generate MPEG-2 TS test data, namely, a test stream based on the black-box test concept for digital TV software. We also introduce a tool to automate the test stream generation known as Auto-TEst data generator from Protocol standard (ATEP). This empirical study of the application of an ATEP-derived test stream to an actual digital TV set-top box should benefit digital TV software developers as well as other testers.", "keywords": "mpeg2-ts test stream;digital tv software test;black-box test", "title": "A method of MPEG2-TS test stream generation for digital TV software"} {"abstract": "Certain discrete probability distributions, used independently of each other in linguistics and other sciences, can be considered as special cases of the distribution based on the Lerch zeta function. We will list the probability functions for some of the most important cases. Moments and estimators are derived for the general Lerch distribution.", "keywords": "lerch zeta function;zipf distributions;estimators;nonlinear equation systems", "title": "UNIFIED REPRESENTATION OF ZIPF DISTRIBUTIONS"} {"abstract": "I review the differences between classical and quantum systems, emphasizing the connection between no-hidden-variable theorems and the superior computational power of quantum computers. 
Using quantum lattice gas automata as examples, I describe possibilities for the efficient simulation of quantum and classical systems with a quantum computer. I conclude with a list of research directions. ", "keywords": "quantum simulation;quantum lattice gas automata", "title": "Physical quantum algorithms"} {"abstract": "This study proposes a slack-diversifying nonlinear fluctuation smoothing rule to reduce the average cycle time in a wafer fabrication factory. The slack-diversifying nonlinear fluctuation smoothing rule is derived from the one-factor tailored nonlinear fluctuation smoothing rule for cycle time variation (1f-TNFSVCT) by dynamically maximizing the standard deviation of the slack, which has been shown to improve scheduling performance in several previous studies. The effectiveness of the proposed rule has been validated using a simulated data set. Based on the findings of this research, we also derive several directions that can be exploited in the future.", "keywords": "wafer fabrication;dispatching rule;slack;diversify;fluctuation smoothing", "title": "A slack-diversifying nonlinear fluctuation smoothing rule for job dispatching in a wafer fabrication factory"} {"abstract": "In this letter, my proposals for a Floating node voltage-controlled Variable Resistor circuit (FVR) are based upon its advantages of being linear and compact. The performance of the proposed circuit was confirmed by PSpice simulation. The simulation results are reported in this letter.", "keywords": "analog integrated circuits;floating node;voltage-controlled variable resistor circuit", "title": "Linear and compact floating node voltage-controlled variable resistor circuit"} {"abstract": "The rapidly changing information technology (IT) environment continues to pose a challenging dilemma for both management information systems (MIS) managers and MIS educators at all levels, especially the collegiate level. This research examines the content of MIS-related job advertisements over a 20-year period: late 1970s-late 1990s. It is the continuation of a study initially published in The Journal of Computer Information Systems (6) and includes the data that represents the late 1990s timeframe. Results trace the rise and fall in demand for certain IT skills and knowledge and identify the growing strength or stability of others. The study clearly exposes the great diversity in the MIS job market. This diversity is the root cause of the dilemma confronting MIS managers and MIS educators as they try to recruit workers from or prepare students for the changing IT environment.", "keywords": "mis job market;mis job market diversity;mis skills;mis knowledge", "title": "The management information systems (MIS) job market late 1970s-late 1990s"} {"abstract": "Retention rate and the digestive and performance effects of ceramic boluses (66 × 20 mm, 65 g) enclosing passive transponders (32.5 × 3.8 mm) were studied in three experiments. Reading distances of transponders inside and outside the boluses (n=10) did not vary. In the first experiment, a total of 2452 boluses were applied to 74 lambs and 808 ewes, 16 young and 67 adult goats, 1138 calves and 349 cows. Plastic balling guns were used to insert the boluses, and their effects were evaluated during 3 years or until slaughter. Time needed for application and recommended live-weights (LW) depended on animal category (sheep, 24 s and >25 kg; goats, 26 s and >20 kg; cattle, 19-240 s and >30 kg). Application in calves was possible during the first week of life. 
Retention rates were 100, 98.8 and 99.7% in sheep, goats and cattle, respectively. The location of boluses in the reticulum was checked with hand-held readers and verified by X-ray in a sample (n=4) of each animal category or directly in cannulated cows (n=3). Transceivers were interfaced with electronic scales for automatic weight recording. Dynamic reading efficiency was 100% in race-ways with a frame antenna (94 × 52 cm). Health and performance were not modified by the boluses. An average of 93% of boluses were found in the reticulum at slaughter. Recovery rates and times varied according to animal category (lambs, 100% and 5 s; ewes and goats, 100% and 8 s; fattened calves, 91.3% and 12 s; dairy cows, 72% and 14 s). In the second experiment, two groups of adult ewes (control, n=5; bolus, n=5) were housed in individual pens and fed forage ad libitum. Mean forage intake and nutrient digestibility were not affected by the ceramic boluses. In the third experiment, 45 fattening male lambs (20 kg LW) and 20 replacement ewe-lambs (30 kg LW) were used. Fattening lambs were divided into two groups and assigned to the treatments (control, n=25; bolus, n=20) until slaughter (25 kg LW). In spite of the difficulties observed in the force-feeding of boluses in eight lambs (40%), average daily gain and reticulum-rumen mucosa were not altered. Ewe-lambs were also assigned to the treatments in two groups (control, n=10; bolus, n=10) and monitored until first lambing or 1 year old. Weight, body condition score and reproductive performance were not affected by the boluses. In conclusion, the use of the ceramic bolus is recommended as a safe and tamper-proof method for the electronic identification of ruminants once the animals have reached a weight where successful administration is possible. Moreover, boluses proved to be useful for dynamic reading and automatic weight recording under farm conditions.", "keywords": "animal identification;ceramic bolus;reading range", "title": "Development of a ceramic bolus for the permanent electronic identification of sheep, goat and cattle"} {"abstract": "Although OWL is rather expressive, it has a very serious limitation on datatypes; i.e., it does not support customised datatypes. It has been pointed out that many potential users will not adopt OWL unless this limitation is overcome, and the W3C Semantic Web Best Practices and Development Working Group has set up a task force to address this issue. This paper makes the following two contributions: (i) it provides a brief summary of OWL-related datatype formalisms, and (ii) it provides a decidable extension of OWL DL, called OWL-Eu, that supports customised datatypes. A detailed proof of the decidability of OWL-Eu is presented.", "keywords": "ontologies;semantic web;description logics;customised datatypes;unary datatype groups", "title": "OWL-Eu: Adding customised datatypes into OWL"} {"abstract": "This paper examines how information technology (IT) transforms relations across fields of practice within organizations. Drawing on Bourdieu's practice theory, we argue that the production of any practice involves varying degrees of embodiment (i.e., relying on personal relationships) and objectification (i.e., relying on the exchange of objects). We subsequently characterize boundary-spanning practices according to their relative degrees of embodiment and objectification. 
We distinguish between \"market-like\" boundary-spanning practices, which rely primarily on an objectified mode of practice production, from \"community-like\" practices, which involve mostly the embodied mode of practice production. IT is then conceptualized as a medium for sharing objects in the production of practices. As such, IT use allows for the sharing of objects without relying on embodied relationships. We use data from an in-depth ethnographic case study to investigate how IT was used to transform community-like boundary-spanning practices within an organization into market-like ones. Moreover, we demonstrate how, as IT was used to support the exchange and combination of depersonalized objects, other aspects of the practice (such as the roles of intermediaries and the nature of meetings) also changed. The related changes in these diverse aspects of a boundary-spanning practice supported the trend toward greater objectification. IT use also increased visibility of the terms associated with object exchange. This increased visibility exposed the inequity of the exchange and encouraged the disadvantaged party to renegotiate the relationship.", "keywords": "boundary objects;boundary spanners;boundary spanning;communities of practice;coordination mechanisms;information technology use;practice theory;qualitative methods", "title": "Turning a community into a market: A practice perspective on information technology use in boundary spanning"} {"abstract": "Cardiovascular disease (CVD) causes unaffordable social and health costs that tend to increase as the European population ages. In this context, clinical guidelines recommend the use of risk scores to predict the risk of a cardiovascular disease event. Some useful tools have been developed to predict the risk of occurrence of a cardiovascular disease event (e.g. hospitalization or death). However, these tools present some drawbacks. These problems are addressed through two methodologies: (i) combination of risk assessment tools: fusion of nave Bayes classifiers complemented with a genetic optimization algorithm and (ii) personalization of risk assessment: subtractive clustering applied to a reduced-dimensional space to create groups of patients. Validation was performed based on two ACS-NSTEMI patient data sets. This work improved the performance in relation to current risk assessment tools, achieving maximum values of sensitivity, specificity, and geometric mean of, respectively, 79.8, 83.8, and 80.9%. Additionally, it assured clinical interpretability, ability to incorporate of new risk factors, higher capability to deal with missing risk factors and avoiding the selection of a standard CVD risk assessment tool to be applied in the clinical practice.", "keywords": "information and knowledge management;management of cardiovascular diseases;decision-support systems", "title": "Integration of Different Risk Assessment Tools to Improve Stratification of Patients with Coronary Artery Disease"} {"abstract": "Several literatures presented automated systems for detecting or classifying sewer pipe defects based on morphological features of pipe defects. In those automated systems, however, the morphologies of the darker center or some uncertain objects on CCTV images are also segmented and become noises while morphology-based pipe defect segmentation is implemented. In this paper, the morphology-based pipe defect segmentation is proposed and discussed to be an improved approach for automated diagnosis of pipe defects on CCTV images. 
The segmentation of pipe defect morphologies first applies an opening operation to the gray-level CCTV images to distinguish pipe defects. Then, Otsu's technique is used to segment pipe defects by determining the optimal thresholds for the gray-level CCTV images after the opening operation. Based on the segmentation results of the CCTV images, the ideal morphologies of four typical pipe defects are defined. If the segmented CCTV images match the definition of those ideal morphologies, the pipe defects on those CCTV images can be successfully identified by a radial basis network (RBN) based diagnostic system. For the remaining CCTV images that fail to match the ideal morphologies, the causes of failure are discussed so as to suggest regulations for imaging conditions, such as camera pose and light source, in order to obtain CCTV images suitable for successful segmentation. ", "keywords": "cctv;image processing;morphologies of pipe defects;diagnostic system", "title": "Segmenting ideal morphologies of sewer pipe defects on CCTV images for automated diagnosis"} {"abstract": "Word sense disambiguation (WSD) can be thought of as the most challenging task in the process of machine translation. Various supervised and unsupervised learning methods have already been proposed for this purpose. In this paper, we propose a new efficient fuzzy classification system to be applied to WSD. In order to optimize the generalization accuracy, we use rule weights as a simple mechanism to tune the classifier and propose a new learning method to iteratively adjust the weight of fuzzy rules. Through computer simulations on the TWA data as a standard corpus, the proposed scheme shows uniformly good behavior and achieves results which are comparable to or better than those of other classification systems proposed in the past.", "keywords": "word sense disambiguation;machine translation;fuzzy systems;classification;rule-weight;generalization accuracy", "title": "A new fuzzy rule-based classification system for word sense disambiguation"} {"abstract": "Conservation laws in cellular automata (CA) are studied as an abstraction of the conservation laws observed in nature. In addition to the usual real-valued conservation laws we also consider more general group-valued and semigroup-valued conservation laws. The (algebraic) conservation laws in a CA form a hierarchy, based on the range of the interactions they take into account. The conservation laws with smaller interaction ranges are the homomorphic images of those with larger interaction ranges, and for each specific range there is a most general law that incorporates all those with that range. For one-dimensional CA, such a most general conservation law has, even in the semigroup-valued case, an effectively constructible finite presentation, while for higher-dimensional CA such effective construction exists only in the group-valued case. It is even undecidable whether a given two-dimensional CA conserves a given semigroup-valued energy assignment. Although the local properties of this hierarchy are tractable in the one-dimensional case, its global properties turn out to be undecidable. In particular, we prove that it is undecidable whether this hierarchy is trivial or unbounded. We point out some interconnections between the structure of this hierarchy and the dynamical properties of the CA. 
In particular, we show that positively expansive CA do not have non-trivial real-valued conservation laws.", "keywords": "cellular automata;conservation laws;energy;reversibility;undecidability;dynamical systems;chaos", "title": "On the hierarchy of conservation laws in a cellular automaton"} {"abstract": "By using a continuation theorem based on coincidence degree theory, we obtain some new sufficient conditions for the existence of positive periodic solutions for the neutral ratio-dependent predator-prey model with Holling type II functional response.", "keywords": "predator-prey model;ratio-dependent;periodic solution;neutral;coincidence degree", "title": "Positive periodic solutions for the neutral ratio-dependent predator-prey model"} {"abstract": "Multigate structures have better short channel control than conventional bulk devices due to increased gate electrostatic control. The FinFET is a promising candidate among multigate structures due to its ease of manufacturability. The RF performance of the FinFET is affected by gate-controlled parameters such as transconductance, output conductance and total gate capacitance. In this paper we have used dual-k spacers in underlap FinFETs to improve the gate electrostatic integrity. The inner high-k spacer helps in better screening out the gate sidewall fringing fields, thereby increasing transconductance and reducing output conductance with an increase in total gate capacitance. At 16 nm gate lengths, we have observed that the intrinsic gain of the dual-k spacer based FinFET can be increased by more than 100% (>6 dB) without affecting the cutoff frequency and maximum oscillation frequency, as compared to the conventional single spacer based FinFET. Improvement in cutoff frequency by 11% and maximum oscillation frequency by 5% can be achieved when the gate lengths are scaled down to 12 nm, in addition to a 2.75 times (8.8 dB) increase in intrinsic gain.", "keywords": "short channel effect;dual-k spacer;figures of merit;electrostatic integrity;intrinsic gain;cutoff frequency", "title": "Impact of dual-k spacer on analog performance of underlap FinFET"} {"abstract": "A vision-based approach for calculating accurate 3D models of objects is presented. Generally, industrial visual inspection systems capable of accurate 3D depth estimation rely on extra hardware tools like laser scanners or light pattern projectors. These tools improve the accuracy of depth estimation but also make the vision system costly and cumbersome. In the proposed algorithm, the depth and dimensional accuracy of the produced 3D depth model depend on the existing reference model instead of information from extra hardware tools. The proposed algorithm is a simple and cost-effective software-based approach to achieve accurate 3D depth estimation with minimal hardware involvement. The matching process uses the well-known coarse-to-fine strategy, involving the calculation of matching points at the coarsest level with consequent refinement up to the finest level. Vector coefficients of the wavelet transform modulus are used as matching features, where the wavelet transform modulus maxima define the shift-invariant high-level features, with phase pointing to the normal of the feature surface. The technique addresses the estimation of optimal corresponding points and the corresponding 2D disparity maps, leading to the creation of an accurate depth perception model. 
", "keywords": "wavelet transform modulus;coarse to fine;disparity;3d depth", "title": "3D depth estimation for visual inspection using in wavelet transform modulus maxima"} {"abstract": "Providing real-time voice support over multihop ad hoc wireless networks (AWNS) is a challenging task. The standard retransmission-based strategies proposed in the literature are poorly matched to voice applications because of timeliness and large overheads involved in transmitting small-sized voice packets. To make a voice application feasible over AWNS, the perceived voice quality must be improved while not significantly increasing the packet overhead. We suggest packet-level media-dependent adaptive forward error correction (FEC) at the application layer in tandem with multipath transport for improving the voice quality. Since adaptive FEC masks packet losses in the network, at the medium access control (MAC) layer, we avoid retransmissions (hence, no acknowledgments) in order to reduce the control overhead and end-to-end delay. Further, we exploit the combined strengths of layered coding and multiple description (MD) coding for supporting error-resilient voice communication in AWNS. We propose an efficient packetization scheme in which the important substream of the voice stream is protected adaptively with FEC depending on the loss rate present in the network and is transmitted over two maximally node-disjoint paths. The less important substream of the voice stream is encoded into two descriptions, which are then transmitted over two maximally node-disjoint paths. The performance of our scheme (packet-level media-dependent adaptive FEC scheme) is evaluated in terms of two parameters: residual packet loss rate (RPLR, packet loss rate after FEC recovery) and average burst length (ABL, average length of consecutive packet losses after FEC recovery) of voice data after FEC recovery. The sets of equations leading to the analytical formulation of both RPLR and ABL are first given for a renewal error process. The values of both these parameters depend on FEC-Offset (r, the distance between original voice frame and piggybacked redundant voice frame) and loss rate present in the network. Then, these parameters are computed for a Gilbert-Elliott (GE) two-state Markov error model and compared with experimental data. Our scheme adaptively selects the FEC-Offset (it chooses r that minimizes RPLR and ABL as much as possible) based on the loss rate feedback obtained from the destination. The proposed scheme achieves significant gains in terms of reduced frame loss rate (FLR), reduced control overhead, and minimum end-to-end delay and almost doubles the perceived voice quality compared to the existing approaches.", "keywords": "ad hoc networks;voice frame;layered coding;multiple description coding;forward error correction;packetization scheme;voice quality;multipath transport;multimedia", "title": "Adaptive FEC-based packet loss resilience scheme for supporting voice communication over ad hoc wireless networks"} {"abstract": "This paper presents the data schema required to capture fundamental elements of design information in a heterogeneous repository supporting design reuse. Design information captured by the repository can be divided into seven main categories of artifact-, function-, failure-, physical-, performance-, sensory- and media-related information types. Each of the seven types of design information is described in detail. 
The repository schema is specific to a relational database system driving the implemented design repository; however, the types of design information recorded are applicable to any implementation of a design repository. The aim of this paper is to fully describe the data schema such that it could be recreated or specialized for industrial or research applications. The result is a complete description of fundamental design knowledge to support design reuse and a data schema specification. The data schema has been vetted with the implemented design repository, which contains design information for over 100 consumer electro-mechanical products.", "keywords": "design repository schema;conceptual design", "title": "Introduction of a data schema to support a design repository"} {"abstract": "The main goal of this paper is to study the finite groups whose lattices of fuzzy subgroups are distributive. We obtain a characterization of these groups which is similar to a well-known result of group theory. ", "keywords": "fuzzy subgroup lattices;subgroup lattices;distributivity;finite cyclic groups;equivalence relations", "title": "Distributivity in lattices of fuzzy subgroups"} {"abstract": "Workflow systems have long been of interest to computer science researchers due to their practical relevance. Supporting delegation mechanisms in workflow systems is receiving increasing research interest. In this paper, we conduct a comprehensive study of user delegation operations in computerized workflow systems. In a workflow system, the semantics of a delegation operation are largely based on three factors: the underlying workflow execution model, task type and delegation type. We describe three different workflow execution models and examine the effect of various delegation operations in each workflow execution model. We then extend our workflow execution models to examine the effect of various delegation operations in different role-based workflow execution models.", "keywords": "delegation;workflow management systems", "title": "on delegation and workflow execution models"} {"abstract": "Detection of anomalies is a broad field of study, which is applied in different areas such as data monitoring, navigation, and pattern recognition. In this paper we propose two measures to detect anomalous behaviors in an ensemble of classifiers by monitoring their decisions: one based on Mahalanobis distance and another based on information theory. These approaches are useful when an ensemble of classifiers is used and a decision is made by ordinary classifier fusion methods, while each classifier is devoted to monitoring part of the environment. Upon detection of anomalous classifiers we propose a strategy that attempts to minimize the adverse effects of faulty classifiers by excluding them from the ensemble. We applied this method to an artificial dataset and sensor-based human activity datasets, with different sensor configurations and two types of noise (additive and rotational on inertial sensors). We compared our method with two other well-known approaches, generalized likelihood ratio (GLR) and One-Class Support Vector Machine (OCSVM), which detect anomalies at the data/feature level. We found that our method is comparable with GLR and OCSVM. The advantage of our method compared to them is that it avoids monitoring raw data or features and only takes into account the decisions that are made by the classifiers; it is therefore independent of sensor modality and the nature of the anomaly. 
On the other hand, we found that OCSVM is very sensitive to the chosen parameters and, furthermore, may react differently to different types of anomalies. In this paper we discuss the application domains which benefit from our method.", "keywords": "anomaly detection;classifier ensemble;decision fusion;human activity recognition", "title": "On-line anomaly detection and resilience in classifier ensembles"} {"abstract": "This article considers the problem of scheduling a given set of independent jobs on unrelated parallel machines to minimize the total weighted tardiness. The problem is known to be NP-hard in the strong sense. Efficient lower and upper bounds are developed. The lower bound is based on the solution of an assignment problem, while the upper bound is obtained by a two-phase heuristic. A branch-and-bound algorithm that incorporates various dominance rules is presented. Computational experiments are conducted to demonstrate the performance of the proposed algorithm. Scope and purpose: Parallel machine scheduling models are important from both the theoretical and practical points of view. From the theoretical point of view, they generalize the single machine scheduling models. From the practical point of view, they are important because the occurrence of a bank of machines in parallel is common in industry. In this article, the unrelated parallel machine total weighted tardiness scheduling problem is examined. The tardiness criterion has many applications in the real world. This problem is difficult to solve. A branch-and-bound algorithm that incorporates various dominance rules along with efficient lower and upper bounds is proposed to find an optimal solution.", "keywords": "scheduling;parallel machines;branch-and-bound;tardiness", "title": "Scheduling unrelated parallel machines to minimize total weighted tardiness"} {"abstract": "We present a practical lock-free shared data structure that efficiently implements the operations of a concurrent deque as well as a general doubly linked list. The implementation supports parallelism for disjoint accesses and uses atomic primitives which are available in modern computer systems. Previously known lock-free algorithms for doubly linked lists are either based on unavailable atomic synchronization primitives, only implement a subset of the functionality, or are not designed for disjoint accesses. Our algorithm only requires single-word compare-and-swap atomic primitives, supports fully dynamic list sizes, and allows traversal also through deleted nodes and thus avoids unnecessary operation retries. We have performed an empirical study of our new algorithm on two different multiprocessor platforms. Results of the experiments performed under high contention show that the performance of our implementation scales linearly with increasing number of processors. Considering deque implementations and systems with low concurrency, the algorithm by Michael shows the best performance. However, as our algorithm is designed for disjoint accesses, it performs significantly better on systems with high concurrency and non-uniform memory architecture.", "keywords": "deque;doubly linked list;non-blocking;lock-free;shared data structure;multi-thread;concurrent", "title": "Lock-free deques and doubly linked lists"} {"abstract": "The explosive increase in data demand coupled with the rapid deployment of various wireless access technologies has led to an increase in the number of multi-homed or multi-interface enabled devices. 
Fully exploiting these interfaces has motivated researchers to propose numerous solutions that aggregate their available bandwidths to increase overall throughput and satisfy the end-users' growing data demand. These solutions, however, do not utilize their interfaces to the maximum without network support and, more importantly, have faced a steep deployment barrier. In this paper, we propose an optimal deployable bandwidth aggregation system (DBAS) for multi-interface enabled devices. We present the DBAS architecture, which does not introduce any intermediate hardware, modify current operating systems or socket implementations, or require changes to current applications or legacy servers. The DBAS architecture is designed to automatically estimate the characteristics of applications and dynamically schedule various connections and/or packets to different interfaces. We also formulate our optimal scheduler as a mixed integer programming problem, yielding an efficient solution. We evaluate DBAS via implementation on the Windows OS and further verify our results with simulations on NS2. Our evaluation shows that, with current Internet characteristics, DBAS reaches the throughput upper bound with no modifications to legacy servers. It also highlights the significant enhancements in response time introduced by DBAS, which directly enhance the user experience.", "keywords": "bandwidth aggregation;multiple network interfaces;throughput;optimization;multihoming", "title": "An optimal deployable bandwidth aggregation system"} {"abstract": "Several dynamic bandwidth allocation algorithms have been introduced to schedule upstream wavelength channels in wavelength division multiplexing Ethernet passive optical networks (WDM EPONs), but mostly for homogeneous WDM EPON networks with the same distance between each optical network unit (ONU) and the optical line terminal. For WDM EPONs with heterogeneous round trip times (RTTs), we propose two algorithms for ONU scheduling, called nearest first and early allocation (EA), and a wavelength assignment algorithm, called best fit (BF). Both ONU scheduling algorithms take RTT dissimilarities into account, thus minimizing packet delay and packet drop ratio at the ONUs. Additionally, EA can relieve the common drawback of offline scheduling, i.e., channel idle time. On the other hand, the BF wavelength assignment algorithm assigns the best wavelength to each ONU in order to improve network performance in terms of packet queuing delay and packet drop ratio at the ONU side.", "keywords": "heterogeneous wdm epon;dynamic bandwidth allocation;nearest first;early allocation;wavelength assignment", "title": "Dynamic bandwidth allocation in heterogeneous WDM EPONs"} {"abstract": "Collaboration is presented as a form of knowledge sharing and hence of knowledge diffusion. A layered framework for collaboration studies is proposed. The notions of relative and absolute proper essential node (PEN) centrality are introduced as indicators of a node's importance for diffusion of knowledge through collaboration.", "keywords": "collaboration;diffusion;layered systems;networks;centrality indicators", "title": "A layered framework to study collaboration as a form of knowledge sharing and diffusion"} {"abstract": "In this paper finite state recognizers are considered as unary tree recognizers with unary operational symbols. 
We introduce translation recognizers of a tree recognizer, which are finite state recognizers whose operations are the elementary translations of the underlying algebra of the considered tree recognizer. In terms of translation recognizers we give general conditions under which a class of recognizable tree languages with a given property can be determined by a class of monoids determining the class of string languages having the same property.", "keywords": "tree automata;tree languages;syntactic monoids", "title": "Classes of tree languages determined by classes of monoids"} {"abstract": "A model of computation is defined over the algebraic numbers and over number fields. This model is non-uniform, and the cost of operations depends on the height of the operands and on the degree of the extension of the rationals defined by those operands. A transfer theorem for the P not equal NP Conjecture is proved, namely: P not equal NP in this model over the real algebraic numbers if and only if P not equal NP in the classical setting. ", "keywords": "np-completeness;computability over a ring;height;transfer theorem", "title": "On a transfer theorem for the P not equal NP conjecture"} {"abstract": "The emerging 4G (fourth generation) networks featuring wider coverage, higher transmission bandwidth and easier deployment have a desirable potential to serve ubiquitous and pervasive multimedia applications in creating new user-centric communication services. However, the practical implementation of 4G networks to demonstrate such potential, especially for delivering real-time and high-quality video services, is scarce. This paper therefore provides the design and implementation of a future Internet live TV system over 4G networks to achieve cost-effectiveness, instead of using expensive satellite news gathering (SNG) vehicles and costly satellite transmissions as in traditional TV stations. To effectively provide live TV services, we apply not only hybrid duplex modes but also port-based VLANs on the deployed networks for maximizing bandwidth, minimizing signal interference, and guaranteeing QoS of differentiated services. Performance metrics are applied to demonstrate that the proposed solution is cost-effective and is feasible for live TV services in the future Internet.", "keywords": "future internet;4g networks;live tv system;satellite news gathering service;vlan", "title": "The design and implementation of a future Internet live TV system over 4G networks"} {"abstract": "In this paper, we study a stochastic particle system that describes homogeneous gas-phase reactions of a number of chemical species. First, we introduce the system as a Markov jump process and discuss how relevant physical quantities are represented in terms of appropriate random variables. Then, we show how various deterministic equations, used in the literature, are derived from the stochastic system in the limit when the number of particles goes to infinity. Finally, we apply the corresponding stochastic algorithm to a toy problem, a simple formal reaction mechanism, and a real combustion problem. This problem is given by the isothermal combustion of a homogeneous mixture of heptane and air modelled by a detailed reaction mechanism with 107 chemical species and 808 reversible reactions. Heptane as described in this chemical mechanism serves as a model fuel for different types of internal combustion engines. 
In particular, we study the order of convergence with respect to the number of simulation particles, and illustrate the limitations of the method. ", "keywords": "stochastic particle method;combustion;convergence;efficiency", "title": "Numerical study of a stochastic particle method for homogeneous gas-phase reactions"} {"abstract": "Developed a weighted Nitsche stabilized method for embedded interfaces with junctions. Provided an explicit expression for the method parameter for lower order elements in the presence of junctions. Examples highlight the method capabilities in modeling grain-boundary sliding behavior.", "keywords": "frictional contact;grain-boundary sliding;junctions;nitsche;polycrystalline;x-fem", "title": "A Nitsche stabilized finite element method for frictional sliding on embedded interfaces. Part II: Intersecting interfaces"} {"abstract": "In this paper we prove that, under suitable conditions, Atanassov's Kα operators, which act on intervals, provide the same numerical results as OWA operators of dimension two. On one hand, this allows us to recover OWA operators from Kα operators. On the other hand, by analyzing the properties of Atanassov's operators, we can generalize them. In this way, we introduce a class of aggregation functions, the generalized Atanassov operators, that, in particular, include two-dimensional OWA operators. We investigate under which conditions these generalized Atanassov operators satisfy some properties usually required for aggregation functions, such as bisymmetry, strictness, monotonicity, etc. We also show that if we apply these aggregation functions to interval-valued fuzzy sets, we obtain an ordered family of fuzzy sets.", "keywords": "OWA operators;Interval-valued fuzzy sets operators;Generalized Kα operators;Dispersion", "title": "A class of aggregation functions encompassing two-dimensional OWA operators"} {"abstract": "We investigate the behavior of data structures when the input and operations are generated by an event graph. This model is inspired by Markov chains. We are given a fixed graph G, whose nodes are annotated with operations of the type insert, delete, and query. The algorithm responds to the requests as it encounters them during a (random or adversarial) walk in G. We study the limit behavior of such a walk and give an efficient algorithm for recognizing which structures can be generated. We also give a near-optimal algorithm for successor searching if the event graph is a cycle and the walk is adversarial. For a random walk, the algorithm becomes optimal.", "keywords": "successor searching;markov chain;low entropy;data structure", "title": "Data Structures on Event Graphs"} {"abstract": "We propose a GARCH model to represent the clutter in radar applications. We fit this model to real sea clutter data and we show that it represents adequately the statistics of the data. Then, we develop a detection test based on this model. Using synthetic and real radar data, we evaluate its performance and we show that the proposed detector offers higher probability of detection for a specified value of probability of false alarm than tests based on Gaussian and Weibull models, especially for low signal-to-clutter ratios.", "keywords": "radar;detection;non-gaussian clutter;garch processes", "title": "Radar detection algorithm for GARCH clutter model"} {"abstract": "Non-photorealistic techniques are usually applied to produce stylistic renderings. 
In visualization, these techniques are often able to simplify data, producing clearer images than traditional visualization methods. We investigate the use of particle systems for visualizing volume datasets using non-photorealistic techniques. In our VolumeFlies framework, user-selectable rules affect particles to produce a variety of illustrative styles in a unified way. The techniques presented do not require the generation of explicit intermediary surfaces.", "keywords": "visualization;non-photorealistic rendering;volume rendering;particle systems", "title": "Particle-based non-photorealistic volume visualization"} {"abstract": "Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework is provided for sequential multiblock methods, including hierarchical PCA (HPCA; two versions), consensus PCA (CPCA; two versions) and generalized PCA (GPCA). Properties of the methods are derived and characteristics of the methods are discussed. All this is illustrated with a real five-block example from chromatography. The only methods with clear optimization criteria are GPCA and one version of CPCA. Of these, GPCA is shown to give inferior results compared with CPCA. ", "keywords": "multiblock methods;hierarchical pca;consensus pca;generalized pca;multiway methods;stationary phases;reversed phase liquid chromatography", "title": "A framework for sequential multiblock component methods"} {"abstract": "Purpose - The paper seeks to reconsider open access and its relation to issues of \"development\" by highlighting the ties the open access movement has with the hegemonic discourse of development and to question some of the assumptions about science and scientific communication upon which the open access debates are based. The paper also aims to bring out the conflict arising from the convergence of the hegemonic discourses of science and development with the contemporary discourse of openness. Design/methodology/approach - The paper takes the form of a critical reading of a range of published work on open access and the so-called \"developing world\" as well as of various open access declarations. The argument is supported by insights from post-development studies. Findings - Open access is presented as an issue of moral concern beyond the narrow scope of scholarly communication. Claims are made based on hegemonic discourses that are positioned as a priori and universal. The construction of open access as an issue of unquestionable moral necessity also impedes the problematisation of its own heritage. Originality/value - This paper is intended to open up the view for open access's less obvious alliances and conflicting discursive ties and thus to initiate a politisation, which is necessary in order to further the debate in a more fruitful way.", "keywords": "developing countries;sciences;communication technologies;journal publishers", "title": "Of the rich and the poor and other curious minds: on open access and \"development\""} {"abstract": "In continuous optimisation, surrogate models (SMs) are used when tackling real-world problems whose candidate solutions are expensive to evaluate. 
In previous work, we showed that a type of SM, radial basis function networks (RBFNs), can be rigorously generalised to encompass combinatorial spaces based in principle on any arbitrarily complex underlying solution representation, by extending their natural geometric interpretation from continuous to general metric spaces. This direct approach to representations does not require a vector encoding of solutions, and allows us to use SMs with the most natural representation for the problem at hand. In this work, we apply this framework to combinatorial problems using the permutation representation and report experimental results on the quadratic assignment problem.", "keywords": "radial-basis functions;representations;surrogate model optimization", "title": "geometric surrogate-based optimisation for permutation-based problems"} {"abstract": "Bacterial Hfq is a highly conserved thermostable protein of about 10kDa. The Hfq protein was discovered in 1968 as an E. coli host factor that was essential for replication of the bacteriophage Qβ. It is now clear that Hfq has many important physiological roles. In E. coli, Hfq mutants show multiple stress-response-related phenotypes. Hfq is now known to regulate the translation of two major stress transcription factors, RpoS and RpoE, in Enterobacteria, and mediates its pleiotropic effects through several mechanisms. It interacts with regulatory sRNAs and facilitates their antisense interaction with their targets. It also acts independently to modulate mRNA decay and in addition acts as a repressor of mRNA translation. A recent paper from Arluison et al. [9] provided the first evidence indicating that Hfq is an ATP-binding protein. They determined a plausible ATP-binding site in Hfq and tested Hfq's ATP-binding affinity and stoichiometry. Experimental data suggest that ATP binding by the Hfq-RNA complex results in significant destabilization of the protein, and the results also prove the important role of Tyr25, which flanks the cleft and stabilizes the adenine portion of ATP, possibly via aromatic stacking. In our study, the ATP molecule was docked into the predicted binding cleft using the GOLD docking software. The binding nature of ATP and its effect on the Hfq-RNA complex were studied using molecular dynamics simulations. The importance of the Tyr25 residue was monitored and revealed using a mutational study on the modeled systems. Our data and the corresponding results point to one of the functional and structural consequences for Hfq of ATP binding and the Tyr25Ala mutation.", "keywords": "host factor protein-hfq;atp;oligoribonucleotide;rna;post-transcriptional regulation;molecular dynamics simulation;mutation;aromatic stacking;destabilization", "title": "Computational approach to ensure the stability of the favorable ATP binding site in E. coli Hfq"} {"abstract": "There are several SDL methodologies that offer full system life-cycle support. Only a few of them consider software reuse, not to mention high-level reuse of architecture and design. However, software reuse is a proven software engineering paradigm leading to high quality and reduced development effort. Experience has made it apparent that, beyond the more traditional reuse of code, high-level reuse of architecture and design (as in the case of design patterns or frameworks) in particular has the potential to achieve more systematic and widespread reuse. This paper presents the SDL pattern approach, a design methodology for distributed systems which integrates SDL-based system development with the pattern paradigm. 
It supports reuse of design knowledge modeled as SDL patterns and concentrates on the design phase of SDL-based system development. In order to get full life-cycle support, the pattern-based design process can be integrated within existing SDL methodologies.", "keywords": "sdl;design methodology;software reuse;patterns;distributed systems;process model", "title": "The SDL pattern approach - a reuse-driven SDL design methodology"} {"abstract": "Repairing a database means bringing the database into accordance with a given set of integrity constraints by applying some minimal change. If a database can be repaired in more than one way, then the consistent answer to a query is defined as the intersection of the query answers on all repaired versions of the database. Earlier approaches have confined the repair work to deletions and insertions of entire tuples. We propose a theoretical framework that also covers updates as a repair primitive. Update-based repairing is interesting in that it allows rectifying an error within a tuple without deleting the tuple, thereby preserving consistent values in the tuple. Another novel idea is the construct of a nucleus: a single database that yields consistent answers to a class of queries, without the need for query rewriting. We show the construction of nuclei for full dependencies and conjunctive queries. Consistent query answering and constructing nuclei are generally intractable under update-based repairing. Nevertheless, we also show some tractable cases of practical interest.", "keywords": "theory;consistent query answering;database repairing", "title": "Database repairing using updates"} {"abstract": "The modern trend of diversification and personalization has encouraged people to boldly express their differentiation and uniqueness in many aspects, and one noticeable example is the wide variety of hairstyles that we can observe today. Given the needs for hairstyle customization, approaches or systems, ranging from 2D to 3D, or from automatic to manual, have been proposed or developed to digitally facilitate the choice of hairstyles. However, nearly all existing approaches struggle to provide realistic hairstyle synthesis results. Assuming the inputs to be 2D photos, the vividness of a hairstyle re-synthesis result relies heavily on the removal of the original hairstyle, because the co-existence of the original hairstyle and the newly re-synthesized hairstyle may lead to serious perceptual artifacts. We resolve this issue by extending the active shape model to more precisely extract the entire facial contour, which can then be used to trim away the hair from the input photo. After hair removal, the facial skin of the revealed forehead needs to be recovered. Since the skin texture is non-stationary and there is little information left, traditional texture synthesis and image inpainting approaches are not well suited to this problem. Our proposed method yields a more desirable facial skin patch by first interpolating a base skin patch, followed by a non-stationary texture synthesis. In this paper, we also aim to reduce the user assistance required during this process as much as possible. We have devised a new and friendly facial contour and hairstyle adjusting mechanism that makes it extremely easy to manipulate and fit a desired hairstyle onto a face. In addition, our system is also equipped with the functionality of extracting the hairstyle from a given photo, which makes our work more complete. 
Moreover, by extracting the face from the input photo, our system allows users to exchange faces as well. At the end of this paper, our re-synthesized results are shown, comparisons are made, and user studies are presented to further demonstrate the usefulness of our system.", "keywords": "active shape model;skin texture synthesis;hairstyle extraction", "title": "Simulation of face/hairstyle swapping in photographs with skin texture synthesis"} {"abstract": "Providing context-aware Web services refers to an adaptive process of delivering contextually matched Web services to meet service requesters' needs at the moment. This article presents an ontology-based context model that enables formal description and acquisition of contextual information pertaining to both service requesters and services. The context model is supported by context query and phased acquisition techniques. We also report two context-aware Web services built on top of our context model to demonstrate how the model can be used to facilitate Web services discovery and Web content adaptation. Implementation details of the context elicitation system and the evaluation results of context-aware services provision are also reported.", "keywords": "context-aware;owl-s;portable devices;service oriented architecture;ubiquitous;web services", "title": "Ubiquitous provision of context-aware web services"} {"abstract": "Extracting representative information is of great interest in data queries and web applications nowadays, where approximate matching between attribute values/records is an important issue in the extraction process. This paper proposes an approach to extracting representative tuples from data classes under an extended possibility-based data model, and to introducing a measure (namely, relation compactness) based upon information entropy to reflect the degree to which a relation is compact in light of information redundancy. Theoretical analysis and data experiments show that the approach has desirable properties: 1) the set of representative tuples has high degrees of compactness (less redundancy) and coverage (rich content); 2) it provides a way to obtain data query outcomes of different sizes in a flexible manner according to user preference; and 3) the approach is also meaningful and applicable to web search applications.", "keywords": "flexible data queries;information equivalence;relation compactness;representativeness;web search", "title": "Extracting Representative Information to Enhance Flexible Data Queries"} {"abstract": "This paper reports on a new approach to solving a subset-based points-to analysis for Java using Binary Decision Diagrams (BDDs). In the model checking community, BDDs have been shown very effective for representing large sets and solving very large verification problems. Our work shows that BDDs can also be very effective for developing a points-to analysis that is simple to implement and that scales well, in both space and time, to large programs. The paper first introduces BDDs and operations on BDDs using some simple points-to examples. Then, a complete subset-based points-to algorithm is presented, expressed completely using BDDs and BDD operations. This algorithm is then refined by finding appropriate variable orderings and by making the algorithm propagate sets incrementally, in order to arrive at a very efficient algorithm. 
Experimental results are given to justify the choice of variable ordering, to demonstrate the improvement due to incrementalization, and to compare the performance of the BDD-based solver to an efficient hand-coded graph-based solver. Finally, based on the results of the BDD-based solver, a variety of BDD-based queries are presented, including the points-to query.", "keywords": "point;order;communities;examples;efficiency;analysis;performance;points-to analysis;timing;paper;binary decision diagrams;incremental;program;variability;model checking;experimentation;space;verification;operability;queries;demonstrate;graph;algorithm;effect;query;completeness", "title": "points-to analysis using bdds"} {"abstract": "In this paper, admission control by a fuzzy Q-learning technique is proposed for WCDMA/WLAN heterogeneous networks with multimedia traffic. The fuzzy Q-learning admission control (FQAC) system is composed of a neural-fuzzy inference system (NFIS) admissibility estimator, an NFIS dwelling estimator, and a decision maker. The NFIS admissibility estimator takes essential system measures into account to judge how well each reachable subnetwork can support the admission request's required QoS and then outputs admissibility costs. The NFIS dwelling estimator considers the Doppler shift and the power strength of the requesting user to assess his/her dwell time duration in each reachable subnetwork and then outputs dwelling costs. Also, in order to minimize the expected maximal cost of the user's admission request, a minimax theorem is applied in the decision maker to determine the most suitable subnetwork for the user request or to reject it. Simulation results show that FQAC can always maintain the system QoS requirement up to a traffic intensity of 1.1 because it can appropriately admit or reject the users' admission requests. Also, FQAC can achieve lower blocking probabilities than the conventional JSAC proposed in [20] and can significantly reduce the handoff rate by 15-20 percent.", "keywords": "fuzzy q-learning;admission control;handoff;heterogeneous network", "title": "Fuzzy Q-Learning Admission Control for WCDMA/WLAN Heterogeneous Networks with Multimedia Traffic"} {"abstract": "A numerical model NEWTANK (Numerical Wave TANK) has been developed to study three-dimensional (3-D) non-linear liquid sloshing with broken free surfaces. The numerical model solves the spatially averaged Navier-Stokes equations, which are constructed on a non-inertial reference frame having arbitrary six-degree-of-freedom (DOF) motions, for two-phase flows. The large-eddy-simulation (LES) approach is adopted to model the turbulence effect by using the Smagorinsky sub-grid scale (SGS) closure model. The two-step projection method is employed in the numerical solutions, aided by the Bi-CGSTAB technique to solve the pressure Poisson equation for the filtered pressure field. The second-order accurate volume-of-fluid (VOF) method is used to track the distorted and broken free surface. Laboratory experiments are conducted for both 2-D and 3-D non-linear liquid sloshing in a rectangular tank. A linear analytical solution of 3-D liquid sloshing under the coupled surge and sway excitation is also developed in this study. The numerical model is first validated against the available analytical solution and experimental data for 2-D liquid sloshing of both inviscid and viscous fluids. The validation is further extended to 3-D liquid sloshing. 
The numerical results match the analytical solution well when the excitation amplitude is small. When the excitation amplitude is large and sloshing becomes highly non-linear, large discrepancies develop between the numerical results and the analytical solutions; the former, however, agree well with the experimental data. Finally, as a demonstration, a violent liquid sloshing with broken free surfaces under six-DOF excitations is simulated and discussed.", "keywords": "liquid sloshing;three-dimensional numerical model;navier-stokes equations;non-inertial reference frame;vof method;broken free surface;analytical solution", "title": "A numerical study of three-dimensional liquid sloshing in tanks"} {"abstract": "The azimuthal localization of objects by echolocating bats is based on the difference of echo intensity received at the two ears, known as the interaural level difference (ILD). Mimicking the neural circuitry in the bat associated with the computation of ILD, we have constructed a spike-based VLSI model that can produce responses similar to those seen in the lateral superior olive (LSO) and some parts of the inferior colliculus (IC). We further explore some of the interesting computational consequences of the dynamics of both synapses and cellular mechanisms.", "keywords": "echolocation;spike-based;vlsi;azimuthal localization;hardware model;masking", "title": "Spike-based VLSI modeling of the ILD system in the echolocating bat"} {"abstract": "A study was conducted to examine the effect of implementing a new system on its users, specifically, the relationship between pre-implementation expectations and the perceived benefits based on post-implementation experience. Disconfirmation theory was used as the theoretical basis; this predicts that unrealistically high expectations will result in lower levels of perceived benefit than those associated with realistic expectations (i.e. where expectations match experience). Support was found for this prediction, refuting the predictions of dissonance theory. In addition to examining expectations of system use generally, six expectation categories were examined to identify the critical categories where managers should keep expectations from becoming unrealistically high. Significant relationships were found for three expectation categories: system usefulness, ease of use, and information quality. The results indicate that creating and maintaining realistic expectations of future system benefits really does matter.", "keywords": "information systems success;end-user satisfaction;user expectations;disconfirmation theory", "title": "Having expectations of information systems benefits that match received benefits: does it really matter"} {"abstract": "A belief rule base inference methodology using the evidential reasoning approach (RIMER) has been developed recently, in which a new belief rule base (BRB) is proposed to extend traditional IF-THEN rules and can capture more complicated causal relationships using different types of information with uncertainties. However, these models are trained off-line, and it is very expensive to train and re-train them. As such, recursive algorithms have been developed to update the BRB systems online; their calculation speed is very high, which is particularly important for systems with a high level of real-time requirements. The optimization models and recursive algorithms have been used for pipeline leak detection. 
However, because the proposed algorithms are both locally optimal and there may be noise in real engineering systems, the trained or updated BRB may violate certain running patterns that a pipeline leak should follow. These patterns can be determined by human experts according to some basic physical principles and historical information. Therefore, this paper describes how, under expert intervention, the recursive algorithm updates the BRB system so that the updated BRB can not only be used for pipeline leak detection but also satisfies the given patterns. Pipeline operations under different conditions are modeled by a BRB using expert knowledge, which is then updated and fine-tuned using the proposed recursive algorithm and pipeline operating data, and validated by testing data. All training and testing data are collected from a real pipeline. The study demonstrates that, under expert intervention, the BRB expert system is flexible, can be automatically tuned to represent complicated expert systems, and may be applied widely in engineering. It is also demonstrated that, compared with other methods such as fuzzy neural networks (FNNs), RIMER has the special characteristic of allowing direct intervention of human experts in deciding the internal structure and the parameters of a BRB expert system.", "keywords": "belief rule base;expert system;evidential reasoning;recursive algorithm;leak detection", "title": "Online updating belief rule based system for pipeline leak detection under expert intervention"} {"abstract": "In this paper we develop a syntax-semantics of negative concord in Romanian within a constraint-based lexicalist framework. We show that n-words in Romanian are best treated as negative quantifiers which may combine by resumption to form polyadic negative quantifiers. Optionality of resumption explains the existence of simple sentential negation readings alongside double negation readings. We solve the well-known problem of defining general semantic composition rules for translations of natural language expressions in a logical language with polyadic quantifiers by integrating our higher-order logical object language in Lexical Resource Semantics (LRS), whose constraint-based composition mechanisms directly support a systematic syntax-semantics for negative concord with polyadic quantification in Head-driven Phrase Structure Grammar (HPSG).", "keywords": "negative concord;romanian;polyadic quantifiers;head-driven phrase structure grammar;lexical resource semantics", "title": "Negative concord with polyadic quantifiers"} {"abstract": "Adoption of online recommendation services can improve the quality of decision making, or it can pose threats to free choice. When people perceive that their freedom is reduced or threatened by others, they are likely to experience psychological reactance, whereby they attempt to restore that freedom. We performed an experimental study to determine whether users' expectation of personalization increased their intention to use recommendation services, and whether their perception of an expected threat to freedom caused by the recommendations reduced their intention to participate. An analysis based on subjects' responses after using a hypothetical shopping website confirmed the two-sided nature of personalized recommendations, suggesting that the approach and avoidance strategies in persuasive communications can be effectively applied to personalized recommendation services on the web. 
Theoretical and practical implications are discussed.", "keywords": "threat to freedom;psychological reactance;recommendation;personalization;online shopping", "title": "Psychological reactance to online recommendation services"} {"abstract": "We have investigated the sampling efficiency in molecular dynamics with the PB implicit solvent when self-guiding forces are added. Compared with a high-temperature dynamics simulation, the use of self-guiding forces in room-temperature dynamics is found to be rather efficient as measured by potential energy fluctuation, gyration radius fluctuation, backbone RMSD fluctuation, number of unique clusters, and distribution of low RMSD structures over simulation time. Based on the enhanced sampling method, we have performed ab initio folding simulations of two small proteins, BBA1 and villin headpiece. The preliminary data for the folding simulations are presented. It is found that BBA1 folding proceeds by initiation of the turn and the helix. The hydrophobic collapse seems to be lagging behind or at most concurrent with the formation of the helix. The hairpin stability is weaker than that of the helix in our simulations. Its role in the early folding events seems to be less important than that of the more stable helix. In contrast, villin headpiece folding proceeds first by hydrophobic collapse. The formation of helices occurs later than the collapse phase, in contrast to the BBA1 folding.", "keywords": "poisson-boltzmann;molecular dynamics;self-guiding forces;protein folding;bba1;villin headpiece", "title": "Enhanced ab initio protein folding simulations in Poisson-Boltzmann molecular dynamics with self-guiding forces"} {"abstract": "This paper introduces an implicit high-order Galerkin finite element Runge-Kutta algorithm for efficient computational investigations of shock structures. The algorithm induces no spatial-discretization artificial diffusion, relies on cubic and higher-degree elements for an accurate resolution of the steep shock gradients, uses an implicit time integration for swift convergence to steady states, and employs original Neumann-type outlet boundary conditions in the form of generalized Rankine-Hugoniot conditions on normal stress and balance of heat flux and deviatoric-stress work per unit time. The formulation automatically calculates the spatial extent of the shock and employs the single non-dimensional (0,1) computational domain for the determination of any shock structure. Since it is implicit, the algorithm rapidly generates steady shock structures, in at most 150 time steps for any upstream Mach number considered in this study. The finite element discretization is shown to be asymptotically convergent under progressive grid refinements, in respect of both the H0 and H1 error norms, with an H0 accuracy order as high as 6 and reduction of the discretization error to the round-off-error threshold of 1×10^-9 with just 420 computational cells and 5th-degree elements. 
For upstream Mach numbers in the range 1.05 ≤ M ≤ 10.0, the computational results satisfy the Rankine-Hugoniot conditions and reflect independently published Navier-Stokes results.", "keywords": "normal shocks;shock-wave structures;computational stability;finite elements;implicit runge-kutta", "title": "An implicit Galerkin finite element Runge-Kutta algorithm for shock-structure investigations"} {"abstract": "Studies of qualitative assessment of organizational processes (e.g., safety audits and performance indicators) and their incorporation into risk models have been based on a normative view that decomposes organizations into separate processes that are likely to fail and lead to accidents. This paper discusses a control theoretic framework of organizational safety that views accidents as a result of performance variability of human behaviors and organizational processes whose complex interactions and coincidences lead to adverse events. Safety-related tasks managed by organizational processes are examined from the perspective of complexity and coupling. This allows safety analysts to look deeper into the complex interactions of organizational processes and how these may remain hidden or migrate toward unsafe boundaries. A taxonomy of variability of organizational processes is proposed and challenges in managing adaptability are discussed. The proposed framework can be used for studying interactions between organizational processes, changes of priorities over time, delays in effects, reinforcing influences, and long-term changes of processes. These dynamic organizational interactions are visualized with the use of system dynamics. The framework can provide a new basis for modeling organizational factors in risk analysis, analyzing accidents and designing safety reporting systems.", "keywords": "organizational safety;systems theory;variability;complexity and coupling;safety management;system dynamics", "title": "A contemporary view of organizational safety: variability and interactions of organizational processes"} {"abstract": "Image-based visual servoing from spherical projection. Decoupled control using invariant features. A near-linear behavior is obtained thanks to the proposed features. The sensitivity to image noise is taken into account.", "keywords": "robust visual servoing;spherical projection", "title": "Robust image-based visual servoing using invariant visual information"} {"abstract": "A distributed algorithm is introduced for the analysis of large continuous time Markov chains (CTMCs) by combining, in some sense, numerical solution techniques and simulation. CTMCs are described as a set of processes communicating via message passing. The state of a process is described by a probability distribution over a set of reachable states rather than by a single state. Simulation is used to determine event times and message types to be exchanged, whereas transitions are realized by vector-matrix products as in iterative numerical analysis techniques. In this way, the state space explosion of numerical analysis is avoided, but it is still possible to determine more detailed results than with simulation. Parallelization of the algorithm is realized by applying a conservative synchronization scheme, which exploits the possibility of precomputing event times, as already proposed for parallel simulation of CTMCs. In contrast to a pure simulation approach, the amount of computation is increased, whereas the amount of communication remains constant. 
Hence a significant speedup can be achieved even on a workstation cluster.", "keywords": "communication;speedup;distributed algorithms;simulation;parallel simulation;synchronization;event;analysis;computation;probability;timing;combinational;product;vectorization;transit;space;exploit;parallel;message;types;process;message-passing;algorithm;distributed;continuation;iter;scheme;cluster", "title": "a distributed numerical/simulative algorithm for the analysis of large continuous time markov chains"} {"abstract": "One of the major sources of unwanted variation in an industrial process is the raw material quality. However, if the raw materials are sorted into more homogeneous groups before production, each group can be treated differently. In this way the raw materials can be better utilized and the stability of the end product may be improved. Prediction sorting is a methodology for doing this. The procedure is founded on the fuzzy c-means algorithm, where the distance in the objective function is based on the predicted end product quality. Usually empirical models such as linear regression are used for predicting the end product quality. By using simulations and bootstrapping, this paper investigates how the uncertainties connected with empirical models affect the optimization of the splitting and the corresponding process variables. The results indicate that the practical consequences of uncertainties in regression coefficients are small. ", "keywords": "raw material variability;sorting;robustness;fuzzy clustering", "title": "Properties of prediction sorting"} {"abstract": "Time lag between subcutaneous interstitial fluid and plasma glucose decreases the accuracy of real-time continuous glucose monitors. However, inverse filters can be designed to correct time lag and attenuate noise, enabling the blood-glucose profile to be reconstructed in real time from continuous measurements of the interstitial-fluid glucose. We designed and tested a Wiener filter using a set of 20 sensor-glucose tracings (~30 h each) with a 1-min sample interval. Delays of 10±2 min (mean±SD) were introduced into each signal with additive Gaussian white noise (SNR=40dB). Performance of the filter was compared to conventional causal and non-causal seventh-order finite-impulse response (FIR) filters. Time lags introduced an error of 5.3±2.7%. The error increased in the presence of noise (to 5.7±2.6%) and attempts to remove the noise with conventional low-pass filtering increased the error still further (to 7.0±3.5%). In contrast, the Wiener filter decreased the error attributed to time delay by ~50% in the presence of noise (from 5.7% to 2.60±1.26%) and by ~75% in the absence of noise (5.3% to 1.31%). Introducing time-lag correction without increasing sensitivity to noise can increase CGM accuracy.", "keywords": "continuous glucose monitoring;wiener filter;time-lag;interstitial fluid", "title": "Interstitial fluid glucose time-lag correction for real-time continuous glucose monitoring"} {"abstract": "In wireless mobile computing environments, broadcasting is an effective and scalable technique to disseminate information to a massive number of clients, wherein energy usage and responsiveness are considered major concerns. Existing air indexing schemes for data broadcast have focused on energy efficiency (reducing tuning time) only. On the other hand, existing broadcast scheduling schemes have aimed at reducing access latency through nonflat data broadcast to improve responsiveness only. 
Not much work has addressed the energy efficiency and responsiveness issues concurrently. In this paper, we propose a fast data access scheme with a concurrent energy-saving protocol that constructs the broadcast channels according to the access frequency of each type of message in order to improve energy efficiency in mobile devices (MDs). The pinwheel scheduling algorithm (PSA) presented in this paper is used to organize all types of messages in the broadcast channel in the most symmetrical distribution in order to reduce both the tuning and access time. The performance of the proposed mechanism is analyzed, and the improvement over existing methods is demonstrated numerically. The results show that the proposed mechanism is capable of improving both the tuning and access time due to the presence of skewness in the access distribution among the disseminated messages. ", "keywords": "mobile computing;wireless broadcast system;energy saving;access time;tuning time", "title": "Fast data access and energy-efficient protocol for wireless data broadcast"} {"abstract": "While the functions of ON and OFF retinal ganglion cells have been intensively investigated, those of ON-OFF cells have not. In the present study, the temporal properties of spike trains emitted from ON-OFF cells in response to randomly flickering or multiphase ramp stimuli were examined in the Japanese quail. The results indicate that the firing of ON-spikes was influenced by the recent firing of OFF-spikes, and vice versa. As a result of this interaction, an OFF/ON sequence of light-intensity change was encoded with a spike pair with an interval of 20ms, indicating that temporal coding is utilized in the vertebrate visual system as early as the retina. Thus, the present results suggest that retinal neuronal circuits may detect specific sequential features of stimuli.", "keywords": "on-off retinal ganglion cells;spike train;stimulus sequence;temporal coding;retina;optic nerve;quail", "title": "ON-OFF retinal ganglion cells temporally encode OFF/ON sequence"} {"abstract": "Edge-preserving denoising is of great interest in medical image processing. This paper presents a wavelet-based multiscale products thresholding scheme for noise suppression of magnetic resonance images. A Canny edge detector-like dyadic wavelet transform is employed. This results in the significant features in images evolving with high magnitude across wavelet scales, while noise decays rapidly. To exploit the wavelet interscale dependencies we multiply the adjacent wavelet subbands to enhance edge structures while weakening noise. In the multiscale products, edges can be effectively distinguished from noise. Thereafter, an adaptive threshold is calculated and imposed on the products, instead of on the wavelet coefficients, to identify important features. Experiments show that the proposed scheme suppresses noise and preserves edges better than other wavelet-thresholding denoising methods.", "keywords": "denoising;magnetic resonance image;multiscale products;thresholding;wavelet transform", "title": "Noise reduction for magnetic resonance images via adaptive multiscale products thresholding"} {"abstract": "A model based on semi-fuzzy support vector domain description (semi-fuzzy SVDD) is put forward to address the multi-classification problem involved in supplier selection. By preprocessing with a semi-fuzzy kernel clustering algorithm, the original samples are divided into two subsets: deterministic samples and fuzzy samples. 
Only the fuzzy samples, rather than all original ones, require expert judgment to decide their categories and are selected as training samples to accomplish SVDD specification. Therefore, the sample preprocessing method not only decreases the experts' workload, but also achieves lower computational cost and better classifier performance. Nevertheless, in order to accomplish practical decision making, another condition has to be met: good explanations of the decision. A rule extraction method based on the cooperative coevolution algorithm (CCEA) is introduced to achieve this target. To validate the proposed methodology, samples from the real world were employed for experiments, with results compared with conventional multi-classification support vector machine approaches and other artificial intelligence techniques. Moreover, in terms of rule extraction, experiments on key parameters and on different methods, including decompositional and pedagogical ones, were also conducted.", "keywords": "supplier selection;semi-fuzzy kernel clustering algorithm;support vector domain description;cooperative coevolution algorithm", "title": "Integration of semi-fuzzy SVDD and CC-Rule method for supplier selection"} {"abstract": "For the mass storage system at DESY, a disk layer is under development. By decoupling the client request queue and access to the mass storage system by means of migration, staging and prefetching, it shall provide full utilization of robot and drive resources. By managing distributed disk resources in the heterogeneous computing environment of DESY, optimized data access shall be provided.", "keywords": "mass storage;disk layer;client-server architecture;network file system", "title": "A distributed disk layer for mass storage at DESY"} {"abstract": "In this paper, we investigate the neural network with three-dimensional parameters for applications like 3D image processing, interpretation of 3D transformations, and 3D object motion. A 3D vector represents a point in the 3D space, and an object might be represented with a set of these points. Thus, it is desirable to have a 3D vector-valued neural network, which deals with three signals as one cluster. In such a neural network, 3D signals are flowing through a network and are the unit of learning. This article also deals with a related 3D back-propagation (3D-BP) learning algorithm, which is an extension of the conventional back-propagation algorithm in the single dimension. 3D-BP has an inherent ability to learn and generalize the 3D motion. The computational experiments presented in this paper evaluate the performance of the considered learning machine in generalization of 3D transformations and 3D pattern recognition.", "keywords": "3d back-propagation algorithm;3d real-valued vector;orthogonal matrix;3d face", "title": "On the learning machine for three dimensional mapping"} {"abstract": "The Horton-Strahler number naturally arose from problems in various fields, e.g., geology, molecular biology and computer science. Consequently, detailed investigations of related parameters for different classes of binary tree structures are of interest. This paper shows one possibility of how to perform a mathematical analysis for parameters related to the Horton-Strahler number in a unified way such that only a single analysis is needed to obtain results for many different classes of trees.
The method is explained by the examples of the expected Horton-Strahler number and the related r-th moments, the average number of critical nodes, and the expected distance between critical nodes.", "keywords": "average-case analysis;combinatorial structures;horton-strahler numbers;analytic combinatorics", "title": "A unified approach to the analysis of Horton-Strahler parameters of binary tree structures"} {"abstract": "Market segmentation comprises a wide range of measurement tools that are useful for the sake of supporting marketing and promotional policies also in the sector of cultural economics. This paper aims to contribute to the literature on segmenting cultural visitors by using the Bagged Clustering method, as an alternative and effective strategy to conduct cluster analysis when binary variables are used. The technique is a combination of hierarchical and partitioning methods and presents several advantages with respect to more standard techniques, such as k-means and LVQ. For this purpose, two ad hoc surveys were conducted between June and September 2011 in the two principal museums of the two provinces of the Trentino-South Tyrol region (Bolzano and Trento), Northern Italy: the South Tyrol Museum of Archaeology in Bolzano (ÖTZI), hosting the permanent exhibition of the Iceman Ötzi, and the Museum of Modern and Contemporary Art of Trento and Rovereto (MART). The segmentation analysis was conducted separately for the two kinds of museums in order to find similarities and differences in behaviour patterns and characteristics of visitors. The analysis identified three and two cluster segments respectively for the MART and ÖTZI visitors, where two ÖTZI clusters presented similar characteristics to two out of three MART groups. Conclusions highlight marketing and managerial implications for better management of the museums.", "keywords": "bagged clustering;logit models;museum;segmentation;motivation", "title": "Visitors of two types of museums: A segmentation study"} {"abstract": "The frequent and volatile unavailability of volunteer-based Grid computing resources challenges Grid schedulers to make effective job placements. The manner in which host resources become unavailable will have different effects on different jobs, depending on their runtime and their ability to be checkpointed or replicated. A multi-state availability model can help improve scheduling performance by capturing the various ways a resource may be available or unavailable to the Grid. This paper uses a multi-state model and analyzes a machine availability trace in terms of that model. Several prediction techniques then forecast resource transitions into the model's states. We analyze the accuracy of our predictors, which outperform existing approaches. We also propose and study several classes of schedulers that utilize the predictions, and a method for combining scheduling factors. We characterize the inherent tradeoff between job makespan and the number of evictions due to failure, and demonstrate how our schedulers can navigate this tradeoff under various scenarios. Lastly, we propose job replication techniques, which our schedulers utilize to replicate those jobs that are most likely to fail. Our replication strategies outperform others, as measured by improved makespan and fewer redundant operations.
In particular, we define a new metric for replication efficiency, and demonstrate that our multi-state availability predictor can provide information that allows our schedulers to be more efficient than others that blindly replicate all jobs or some static percentage of jobs.", "keywords": "multi-state;prediction;availability;characterization;scheduling;replication", "title": "Grid Resource Availability Prediction-Based Scheduling and Task Replication"} {"abstract": "Let G be a simple, undirected graph with vertex set V. For v ∈ V and r ≥ 1, we denote by B_{G,r}(v) the ball of radius r and centre v. A set C ⊆ V is said to be an r-identifying code in G if the sets B_{G,r}(v) ∩ C, v ∈ V, are all nonempty and distinct. A graph G which admits an r-identifying code is called r-twin-free or r-identifiable, and in this case the smallest size of an r-identifying code in G is denoted by γ_r^{ID}(G). We study the number of different optimal r-identifying codes C, i.e., such that |C| = γ_r^{ID}(G), that a graph G can admit, and try to construct graphs having many such codes.", "keywords": "graph theory;twin-free graphs;identifiable graphs;identifying codes", "title": "On the number of optimal identifying codes in a twin-free graph"} {"abstract": "The impact of triangle shapes, including angle sizes and aspect ratios, on accuracy and stiffness is investigated for simulations of highly anisotropic problems. The results indicate that for high-order discretizations, large angles do not have an adverse impact on solution accuracy. However, a correct aspect ratio is critical for accuracy for both linear and high-order discretizations. Large angles are also found to be not problematic for the conditioning of the linear systems arising from the discretizations. Further, when choosing preconditioning strategies, coupling strengths among elements rather than element angle sizes should be taken into account. With an appropriate preconditioner, solutions on meshes with and without large angles can be achieved within a comparable time.", "keywords": "triangle shape;large angle;aspect ratio;high-order finite element;anisotropic problems;ilu-factorization", "title": "On the impact of triangle shapes for boundary layer problems using high-order finite element discretization"} {"abstract": "Tree-walking automata are a natural sequential model for recognizing languages of finite trees. Such automata walk around the tree and may decide in the end to accept it. It is shown that deterministic tree-walking automata are weaker than nondeterministic tree-walking automata.", "keywords": "tree-walking automata;deterministic tree-walking automata", "title": "Tree-walking automata cannot be determinized"} {"abstract": "Changes and adaptations are always necessary after the deployment of a multi-agent system (MAS), as well as of any other type of software system. Some of these changes may be simply perfective and have local impact only. However, adaptive changes to meet new situations in the operational environment of the MAS may impact globally on the overall design. More specifically, those changes usually affect the organizational structure of the MAS. In this paper we analyze the issue of design change/adaptation in a MAS organization, and the specific problem of how to properly model/design a MAS so as to make it ready for adaptation.
Special attention is paid to the Gaia methodology, whose suitability in dealing with adaptable MAS organizations is also discussed with the help of an illustrative application example.", "keywords": "agent-oriented methodologies;adaptive/adaptable organizations;design for changes;gaia methodology", "title": "ADAPTABLE MULTI-AGENT SYSTEMS: THE CASE OF THE GAIA METHODOLOGY"} {"abstract": "We consider multicriteria allocation problems with linear sum objectives. Despite the fact that the single objective allocation problem is easily solvable, we show that already in the bicriteria case the problem becomes intractable, is NP-hard and has a non-connected efficient set in general. Using the equivalence to appropriately defined multiple criteria multiple-choice knapsack problems, an algorithm is suggested that uses partial dominance conditions to save computational time. Different types of enumeration schemes are discussed, for example, with respect to the number of necessary filtering operations and with regard to possible parallelizations of the procedure.", "keywords": "multicriteria optimization;combinatorial optimization;location-allocation problem", "title": "On the multicriteria allocation problem"} {"abstract": "We present two new algorithms (which supplement Algorithms 1, 2 and 3 presented in part 1) to optimize the tool path of the five-axis numerically controlled milling machine. Algorithm 4 optimizes a set of feasible rotations. Algorithm 5 presents a least-squares optimization with regard to a setup of the machine.", "keywords": "nc-programming;cad/cam;optimization;milling machines", "title": "Optimization and correction of the tool path of the five-axis milling machine - Part 2: Rotations and setup"} {"abstract": "In these interesting times computer scientists are increasingly called upon to help concerned citizens understand the risks involved in the current generation of electronic voting machines. These risks and the concurrent escalation of legal challenges to the election system in the United States have shaken the confidence of many Americans that a fair and accurate election is even possible. As computer science educators we have an opportunity to add breadth and depth to our curriculum by using these issues to show how existing concepts can be applied to new problems, and how new problems extend our field. In this paper we identify some of the main problems with e-voting machines and vote-counting technology and suggest ways that discussions of the risks and the attendant societal and ethical issues might be incorporated into the computer science curriculum.", "keywords": "electronic voting", "title": "teaching about the risks of electronic voting technology"} {"abstract": "Background: Considerable barriers still prevent paediatricians from successfully using information retrieval technology.", "keywords": "decision support;evidence based library and information practice;evidence based practice;evidence-based medicine;health science;health services research;information seeking behaviour;librarians;library and information science;reflective practice", "title": "Effectiveness of bibliographic searches performed by paediatric residents and interns assisted by librarians. A randomised controlled trial"} {"abstract": "Prediction of corporate bankruptcy is a phenomenon of increasing interest to investors/creditors, borrowing firms, and governments alike. Timely identification of firms' impending failure is indeed desirable.
To date, several methods have been used for predicting bankruptcy, but some of them suffer from underlying shortcomings. In recent years, Genetic Programming (GP) has received great attention in academic and empirical fields for efficiently solving highly complex problems. GP is a technique for programming computers by means of natural selection. It is a variant of the genetic algorithm, which is based on the concept of adaptive survival in natural organisms. In this study, we investigated the application of GP to bankruptcy prediction modeling. GP was applied to classify 144 bankrupt and non-bankrupt Iranian firms listed on the Tehran Stock Exchange (TSE). Then a multiple discriminant analysis (MDA) was used to benchmark the GP model. The genetic model achieved 94% and 90% accuracy rates in training and holdout samples, respectively, while the MDA model achieved only 77% and 73% accuracy rates in training and holdout samples, respectively. The McNemar test showed that the GP approach outperforms MDA on the problem of corporate bankruptcy prediction.", "keywords": "bankruptcy prediction;financial ratios;genetic programming;multiple discriminant analysis;iranian companies", "title": "A genetic programming model for bankruptcy prediction: Empirical evidence from Iran"} {"abstract": "Classification of items as good or bad can often be achieved more economically by examining the items in groups rather than individually. If the result of a group test is good, all items within it can be classified as good, whereas one or more items are bad in the opposite case. Whether it is necessary to identify the bad items or not, and if so, how, is described by the screening policy. In the course of time, a spectrum of group screening models has been studied, each including some policy. However, the majority ignore that items may arrive at random time epochs at the testing center in real-life situations. This dynamic aspect leads to two decision variables: the minimum and maximum group size. In this paper, we analyze a discrete-time batch-service queueing model with a general dependency between the service time of a batch and the number of items within it. We deduce several important quantities, by which the decision variables can be optimized. In addition, we highlight that every possible screening policy can, in principle, be studied, by defining the dependency between the service time of a batch and the number of items within it appropriately.", "keywords": "queueing;group screening policies;dynamic item arrivals", "title": "A queueing model for general group screening policies and dynamic item arrivals"} {"abstract": "Over 80% of web services are vulnerable to attack, and much of the danger arises from command injection vulnerabilities. We present an efficient character-level taint tracking system for Java web applications and argue that it can be used to defend against command injection vulnerabilities. Our approach involves modification only to Java library classes and the implementation of the Java servlets framework, so it requires only a one-time modification to the server without any subsequent modifications to a web application's bytecode or access to the web application's source code. This makes it easy to deploy our technique and easy to secure legacy web software.
Our preliminary experiments with the JForum web application suggest that character-level taint tracking adds 0-15% runtime overhead.", "keywords": "information flow;java;dynamic taint tracking;web applications", "title": "efficient character-level taint tracking for java"} {"abstract": "This article explores the subpixel accuracy attainable for the disparity computed from a rectified stereo pair of images with small baseline. In this framework we consider translations as the local deformation model between patches in the images. A mathematical study first shows how discrete block-matching can be performed with arbitrary precision under Shannon-Whittaker conditions. This study leads to the specification of a block-matching algorithm which is able to refine disparities with subpixel accuracy. Moreover, a formula for the variance of the disparity error caused by the noise is introduced and proved. Several simulated and real experiments show a decent agreement between this theoretical error variance and the observed root mean squared error in stereo pairs with good signal-to-noise ratio and low baseline. A practical consequence is that under realistic sampling and noise conditions in optical imaging, the disparity map in stereo-rectified images can be computed for the majority of pixels (but only for those pixels with meaningful matches) with a 1/20 pixel precision.", "keywords": "block-matching;subpixel accuracy;noise error estimate", "title": "How Accurate Can Block Matches Be in Stereo Vision"} {"abstract": "In this article, we present a unified perspective on the cognitive internet of things (CIoT). It is noted that within the CIoT design we observe the convergence of energy harvesting, cognitive spectrum access and mobile cloud computing technologies. We unify these distinct technologies into a CIoT architecture which provides a flexible, dynamic, scalable and robust network design road-map for large scale IoT deployment. Since the prime objective of the CIoT network is to ensure connectivity between things, we identify key metrics which characterize the network design space. We revisit the definition of cognition in the context of IoT networks and argue that both the energy efficiency and the spectrum efficiency are key design constraints. To this end, we define a new performance metric called the overall link success probability which encapsulates these constraints. The overall link success probability is characterized by both the self-sustainability of the link through energy harvesting and the availability of spectrum for transmissions. With the help of a reference scenario, we demonstrate that well-known tools from stochastic geometry can be employed to investigate both the node and the network level performance. In particular, the reference scenario considers a large scale deployment of a CIoT network empowered by solar energy harvesting deployed along with the centralized CIoT device coordinators. It is assumed that the CIoT network is underlaid with a cellular network, i.e., CIoT nodes share spectrum with mobile users subject to a certain co-existence constraint. Considering the dynamics of both energy harvesting and spectrum sharing, the overall link success probability is then quantified. It is shown that both the self-sustainability of the link and the availability of transmission opportunities are coupled through a common parameter, i.e., the node-level transmit power.
Furthermore, provided the co-existence constraint is satisfied, the link-level success in the presence of both the inter-network and intra-network interference is an increasing function of the transmit power. We demonstrate that the overall link-level success probability can be maximized by employing a certain optimal transmit power. Characterization of such an optimal operational point is presented. Finally, we highlight some of the future directions which can benefit from the analytical framework developed in this paper.", "keywords": "internet-of-things;cognitive radios;solar energy harvesting;stochastic cloud cover;shared spectrum;underlay;interference", "title": "The Cognitive Internet of Things: A Unified Perspective"} {"abstract": "We prove that constant depth circuits, with one layer of MODm gates at the inputs, followed by a fixed number of layers of MODp gates, where p is prime, require exponential size to compute the MODq function, if q is a prime that divides neither p nor m.", "keywords": "circuit complexity;modular counting", "title": "Lower bounds for modular counting by circuits with modular gates"} {"abstract": "Any account of computation in a physical system, whether an artificial computing device or a natural system considered from a computational point of view, invokes some notion of the relationship between the abstract-logical and concrete-physical aspects of computation. In a recent paper, James Ladyman explored this relationship using a "hybrid physical-logical entity" - the L-machine - and the general account of computation that it supports [J. Ladyman, What does it mean to say that a physical system implements a computation?, Theoretical Computer Science 410 (2009) 376-383]. The underlying L-machine of Ladyman's analysis is, however, classical and highly idealized, and cannot capture essential aspects of computation in important classes of physical systems (e.g. emerging nanocomputing devices) where logical states do not have faithful physical representations and where noise and quantum effects prevail. In this work we generalize the L-machine to allow for generally unfaithful and noisy implementations of classical logical transformations in quantum mechanical systems. We provide a formal definition and physical-information-theoretic characterization of generalized quantum L-machines (QLMs), identify important classes of QLMs, and introduce new efficacy measures that quantify the faithfulness and fidelity with which logical transformations are implemented by these machines. Two fundamental issues emphasized by Ladyman - realism about computation and the connection between logical and physical irreversibility - are reconsidered within the more comprehensive account of computation that follows from our generalization of the L-machine.", "keywords": "implementation;l-machine;landauer's principle;physical information theory;physics of computation;realism about computation;representation", "title": "On the physical implementation of logical transformations: Generalized L-machines"} {"abstract": "Manipulatives (physical learning materials such as cubes or tiles) are prevalent in educational settings across cultures and have generated substantial research into how actions with physical objects may support children's learning. The ability to integrate digital technology into physical objects (so-called digital manipulatives) has generated excitement over the potential to create new educational materials.
However, without a clear understanding of how actions with physical materials lead to learning, it is difficult to evaluate or inform designs in this area. This paper is intended to contribute to the development of effective tangible technologies for children's learning by summarising key debates about the representational advantages of manipulatives under two key headings: offloading cognition, where manipulatives may help children by freeing up valuable cognitive resources during problem solving, and conceptual metaphors, where perceptual information or actions with objects have a structural correspondence with more symbolic concepts. The review also indicates possible limitations of physical objects, most importantly that their symbolic significance is only granted by the context in which they are used. These arguments are then discussed in light of tangible designs drawing upon the authors' current research into tangibles and young children's understanding of number.", "keywords": "tangible technologies;physical manipulatives;mathematics learning;educational technology;virtual manipulatives", "title": "Tangibles for learning: a representational analysis of physical manipulation"} {"abstract": "With the rapid development of high-throughput experiment techniques for protein-protein interaction (PPI) detection, a large amount of PPI network data are becoming available. However, the data produced by these techniques have high levels of spurious and missing interactions. This study assigns a new reliability indication to each protein pair via the new generative network model (RIGNM), where the scale-free property of the PPI network is considered to reliably identify both spurious and missing interactions in the observed high-throughput PPI network. The experimental results show that the RIGNM is more effective and interpretable than the compared methods, which demonstrates that this approach has the potential to better describe the PPI networks and drive new discoveries.", "keywords": "protein-protein interaction network;generative network model;ppi data denoising", "title": "Identifying Spurious Interactions and Predicting Missing Interactions in the Protein-Protein Interaction Networks via a Generative Network Model"} {"abstract": "Generalisation of the foundational basis for many-valued logic programming builds upon generalised terms in the form of powersets of terms. A categorical approach involving set and term functors as monads allows for a study of monad compositions that provide variable substitutions and compositions thereof. In this paper, substitutions and unifiers appear as constructs in Kleisli categories related to particular composed powerset term monads. Specifically, we show that a frequently used similarity-based approach to fuzzy unification is compatible with the categorical approach, and can be adequately extended in this setting; also some examples are included in order to illuminate the definitions.", "keywords": "similarities;fuzzy unification;category theory and unification;generalised terms", "title": "Similarities between powersets of terms"} {"abstract": "A feedback vertex set (FVS) in a graph is a subset of vertices whose complement induces a forest. Finding a minimum FVS is NP-complete on bipartite graphs, but tractable on convex bipartite graphs and on chordal bipartite graphs. A bipartite graph is called tree convex if a tree is defined on one part of the vertices, such that for every vertex in the other part, its neighborhood induces a subtree.
When the tree is a path, a triad or a star, the bipartite graph is called convex bipartite, triad convex bipartite or star convex bipartite, respectively. We show that: (1) FVS is tractable on triad convex bipartite graphs; (2) FVS is NP-complete on star convex bipartite graphs and on tree convex bipartite graphs where the maximum degree of vertices on the tree is at most three.", "keywords": "feedback vertex set;tree convex bipartite;polynomial time;np-complete", "title": "Feedback vertex sets on restricted bipartite graphs"} {"abstract": "The extended Delaunay tessellation (EDT) is presented in this paper as the unique partition of a node set into polyhedral regions defined by nodes lying on the nearby Voronoi spheres. Until recently, all the FEM mesh generators were limited to the generation of tetrahedral or hexahedral elements (or triangular and quadrangular in 2D problems). The reason for this limitation was the lack of any acceptable shape function to be used in other kinds of geometrical elements. Nowadays, there are several acceptable shape functions for a very large class of polyhedra. These new shape functions, together with the EDT, give an optimal combination and a powerful tool to solve a large variety of physical problems by numerical methods. The domain partition into polyhedra presented here does not introduce any new node nor change any node position. This makes the process suitable for Lagrangian problems and meshless methods in which only the connectivity information is used and there is no need for any expensive smoothing process.", "keywords": "mesh generation;delaunay/voronoi tessellations", "title": "The extended Delaunay tessellation"} {"abstract": "In deterministic as well as stochastic models, stiff systems, i.e., systems with vastly different time scales where the fast scales are stable, are very common. It is well known that the implicit Euler method is well suited for stiff deterministic equations (modeled by ODEs) while the explicit Euler is not. In particular, once the fast transients are over, the implicit Euler allows for the choice of time steps comparable to the slowest time scales of the system. In stochastic systems (modeled by SDEs) the picture is more complex. While the implicit Euler has better stability properties over the explicit Euler, it underestimates the stationary variance. In general, one may not expect any method to work successfully by taking time steps of the order of the slowest time scale. We explore the idea of interlacing large implicit Euler steps with a sequence of small explicit Euler steps. In particular, we present our study of a linear test system of SDEs and demonstrate that such interlacing could effectively deal with stiffness. We also discuss the uniform convergence of mean and variance.", "keywords": "explicit euler method;stochastic differential equations;implicit euler method;uniform convergence;stiffness", "title": "interlaced euler scheme for stiff systems of stochastic differential equations"} {"abstract": "This paper is concerned with the inverse medium scattering problem in a perturbed, layered half-space, which is a problem related to the seismological investigation of inclusions inside the earth's crust. A wave-penetrable object is located in a layer where the refraction index is different from the other part of the half-space. Wave propagation in such a layered half-space is different from that in a homogeneous half-space. In a layered half-space, a scattered wave consists of a free wave and a guided wave.
In many cases, only the free-wave far-field or only the guided-wave far-field can be measured. We establish mathematical formulas for relations between the object, the incident wave and the scattered wave. In the ideal condition where exact data are given, we prove the uniqueness of the inverse problem. A numerical example is presented for the reconstruction of a penetrable object from simulated noisy data.", "keywords": "inverse problems", "title": "Inverse problem for wave propagation in a perturbed layered half-space"} {"abstract": "This paper presents an analysis of user studies from a review of papers describing new visualisation applications and uses these to highlight various issues related to the evaluation of visualisations. We first consider some of the reasons why the process of evaluating visualisations is so difficult. We then dissect the problem by discussing the importance of recognising the nature of experimental design, datasets and participants as well as the statistical analysis of results. We propose explorative evaluation as a method of discovering new things about visualisation techniques, which may give us a better understanding of the mechanisms of visualisations. Finally we give some practical guidance on how to do evaluation correctly.", "keywords": "explorative evaluation;case study;evaluation;information visualisation", "title": "an explorative analysis of user evaluation studies in information visualisation"} {"abstract": "Management information system (MIS) students are one of the most important information system (IS) employee sources. However, the determinants of burnout for MIS major students have received little attention, despite their importance as indicators in predicting professional burnout and working intention after graduation, when they become IS professionals. This study explores the antecedents of student burnout for MIS majors at technical-vocational colleges. Self-efficacy, social support, and sex-role were considered as antecedents to MIS student burnout. A self-administered questionnaire was used in this study. Multiple regression analysis was used to test the hypotheses. Statistical results showed that MIS students' social support, self-efficacy and femininity have predictive power over student burnout. MIS students' social support and masculinity also have predictive power over self-efficacy.", "keywords": "technical-vocational education;student burnout;self-efficacy;social support;sex-role", "title": "An investigation of the factors affecting MIS student burnout in technical-vocational college"} {"abstract": "This paper presents a numerical method for modelling the dynamic thermal behaviour of microelectronic structures in the frequency domain. A boundary element method (BEM) based on a Green's function solution is proposed for solving the 3D heat equation in phasor notation. The method is capable of calculating the AC temperature and heat flux distributions and complex thermal impedance for packages composed of an arbitrary number of bar-shaped components. Various types of boundary conditions, including thermal contact resistance and convective cooling, can be taken into account. A simple benchmark case is investigated and a good convergence towards the analytical solution is obtained. Simulation results for a thin plate under convective cooling are compared with a theoretical model and an excellent agreement is observed.
In a second example, a more complicated three-layer structure is investigated. The BEM is used to analyse the thermal behaviour if delamination of the package occurs, and a physical explanation for the results is given.", "keywords": "thermal impedance;microelectronics;boundary element method;green's function;nyquist plot;heat transfer;phasor notation", "title": "BEM calculation of the complex thermal impedance of microelectronic devices"} {"abstract": "Texture transfer is a method that copies the texture of a reference image to a target image. This technique has an advantage in that various styles can be expressed according to the reference image, in a single framework. However, in this technique, it is not easy to control the effect of each style. In addition, when this technique is extended to processing video images, maintaining temporal coherence is very difficult. In this paper, we propose an algorithm that transfers the texture of a reference image to a target video while retaining the directionality of the target video. The algorithm maintains the temporal coherency of the transferred texture, and controls the style of the texture transfer.", "keywords": "texture transfer;temporal coherence;example-based rendering;video processing", "title": "Directional texture transfer for video"} {"abstract": "High altitude wind parameters can be extracted from recorded aircraft positions. This method gives the best results when aircraft are stabilized (no turn, no climb, no descent). This method can extract wind dynamics, e.g. how wind parameters change over time. Our method performs well thanks to a mix of automatic wind extraction and direct manipulation techniques.", "keywords": "wind extraction;least squares approximation;air traffic control;data exploration;visual analytics", "title": "Wind parameters extraction from aircraft trajectories"} {"abstract": "The stratum corneum, the outer layer of the epidermis, serves as a protective barrier to isolate the skin from the external environment. Keratinocyte transglutaminase 1 (TGase 1) catalyzes amide crosslinking between glutamine and lysine residues on precursor proteins forming the impermeable layers of the epidermal cell envelopes (CE), the highly insoluble membranous structures of the stratum corneum. Patients with the autosomal recessive skin disorder lamellar ichthyosis (LI) appear to have deficient cross-linking of the cell envelope due to mutations identified in TGase 1, linking this enzyme to LI. In the absence of a crystal structure, molecular modeling was used to generate the structure of TGase 1. We have mapped the known mutations of TGase 1 from our survey obtained from a search of PubMed and successfully predicted the impact of these mutations on LI. Furthermore, we have identified Ca(2+) binding sites and propose that Ca(2+) induces a cis to trans isomerization in residues near the active site as part of the enzyme transamidation activation. Docking experiments suggest that substrate binding subsequently induces the reverse, trans to cis, isomerization, which may be a significant part of the catalytic process.
These results give an interpretation at the molecular level of previously reported mutations and lead to further insights into the structural model of TGase 1, providing a new basis for understanding LI.", "keywords": "keratinocyte transglutaminase 1;lamellar ichthyosis;mutations;metal ions;isomerization;molecular modeling", "title": "A three-dimensional model of the human transglutaminase 1: insights into the understanding of lamellar ichthyosis"} {"abstract": "Purpose - The purpose of this research is to draw on both perspectives of technological perceptions and flow experience to examine continuance usage of mobile sites. Design/methodology/approach - Based on the valid responses collected from a survey questionnaire, structural equation modeling technology was employed to examine the research model. Findings - The results indicated that both perspectives of technological perceptions and flow experience have effects on satisfaction, which in turn affects continuance usage. Technological perceptions include system quality and information quality, whereas flow experience includes perceived enjoyment and attention focus. Among them, perceived enjoyment has the largest effect on satisfaction. Research limitations/implications - This research is conducted in China, where mobile internet is still in its early stage. Thus, the results need to be generalized carefully to other countries that have developed mobile internet. Originality/value - Previous research has focused on the effects of instrumental beliefs such as perceived usefulness on mobile user continuance. However, user behavior may be also affected by intrinsic motivations such as flow. This research tries to fill the gap.", "keywords": "information quality;system quality;mobile sites", "title": "Understanding continuance usage of mobile sites"} {"abstract": "We present an accurate model and procedures for predicting the common physical design characteristics of standard cell layouts (i.e., the interconnection length and the chip area). The predicted results are obtained from analysis of the net list only, that is, no prior knowledge of the functionality of the design is used. Random and optimized placements, global routing, and detailed routing are each abstracted by procedural models that capture the important features of these processes, and closed-form expressions that define these procedural models are presented. We have verified both the global characteristics (total interconnection length and layout area) and the detailed characteristics (wire length and feedthrough distributions) of the model. On the designs in our test suite, the estimates are very close to the actual layouts.", "keywords": "global route modeling;interconnection length estimation;layout area estimation;placement modeling;standard cell layout", "title": "Interconnection analysis for standard cell layouts"} {"abstract": "Linear Independent Components Analysis (ICA) has become an important signal processing and data analysis technique, the typical application being blind source separation in a wide range of signals, such as biomedical, acoustical and astrophysical ones. Nonlinear ICA is less developed, but has the potential to become at least as powerful. This paper presents MISEP, an ICA technique for linear and nonlinear mixtures, which is based on the minimization of the mutual information of the estimated components.
MISEP is a generalization of the popular INFOMAX technique, which is extended in two ways: (1) to deal with nonlinear mixtures, and (2) to be able to adapt to the actual statistical distributions of the sources, by dynamically estimating the nonlinearities to be used at the outputs. The resulting MISEP method optimizes a network with a specialized architecture, with a single objective function: the output entropy. The paper also briefly discusses the issue of nonlinear source separation. Examples of linear and nonlinear source separation performed by MISEP are presented.", "keywords": "ica;blind source separation;nonlinear ica;mutual information", "title": "MISEP - Linear and nonlinear ICA based on mutual information"} {"abstract": "Reviews experiments in design and urbanism, intervening in the development of transdisciplinary systems theory for decision-making organizations. Presents beyond state-of-the-art phenomena, of a morphological and topological type (out of architecture), and advocates harnessing such creative power for problem solving in informatics.", "keywords": "architecture;cybernetics;perception", "title": "Beyond state-of-the-art topology as normative ground for decision-making systems"} {"abstract": "Physical system modelling with known parameters, together with 2-D or high-order look-up tables (obtained from experimental data), has been the preferred method for simulating electric vehicles. The non-linear phenomena which are present at the vehicle tyre patch and ground interface have resulted in a quantitative understanding of these phenomena. However, nowadays, there is a requirement for a deeper understanding of the vehicle sub-models which previously used look-up tables. In this paper the hybrid modelling methodology used for electric vehicle systems offers a two-stage advantage: firstly, the vehicle model retains a comprehensive analytical formulation and secondly, the 'fuzzy' element offers, in addition to the quantitative results, a qualitative understanding of specific vehicle sub-models. In the literature several hybrid topologies are reported: sequential, auxiliary, and embedded. In this paper, the hybrid model topology selected is auxiliary, and within the same hybrid model, the first paradigm used is the vehicle dynamics together with the actuator/gearbox system. The second paradigm is the non-linear fuzzy tyre model for each wheel. In particular, conventional physical system dynamic modelling has been combined with the fuzzy logic type-II or type-III methodology. The resulting hybrid-fuzzy tyre models were estimated for an a-priori number of rules from experimental data. The physical system modelling required the available vehicle parameters such as the overall mass, wheel radius and chassis dimensions. The suggested synergetic fusion of the two methods (hybrid-fuzzy) allowed the vehicle planar trajectories to be obtained prior to the hardware development of the entire vehicle. The strength of this methodology is that it requires localised system experimental data rather than global system data. The disadvantage in obtaining global experimental data is the requirement for comprehensive testing of a vehicle prototype, which is a time-consuming process and requires extensive resources. In this paper the authors have proposed the use of existing experimental rigs which are available from the leading automotive manufacturers. Hence, for the 'hybrid' modelling, localised data sets were used.
In particular, wheel-tyre experimental data were obtained from the University Tyre Rig experimental facilities. Tyre forces acting on the tyre patch are mainly responsible for the overall electric vehicle motion. In addition, tyre measurement rigs are a well-known method for obtaining localised data, thus allowing the effective simulation of more detailed mathematical models. These include, firstly, physical system modelling (conventional vehicle dynamics), secondly, fuzzy type II or III modelling (for the tyre characteristics), and thirdly, electric drive modelling within the context of electric vehicles. The proposed hybrid model synthesis has resulted in simulation results which are similar to piece-wise 'look-up' table solutions. In addition, the strength of the 'hybrid' synthesis is that the analyst has a set of rules which clearly show the reasoning behind the complex development of the vehicle tyre forces. This is due to the inherent transparency of the type II and type III methodologies. Finally, the authors discussed the reasons for selecting a type-III framework. The paper concludes with a plethora of simulation results.", "keywords": "hybrid model synthesis;type-ii and type-iii fuzzy systems;parameter estimation;electric drives", "title": "Fuzzy-hybrid modelling of an Ackerman steered electric vehicle"} {"abstract": "This article presents a non-invasive speech processing method for the assessment and evaluation of voice hoarseness. A technique based on time-scale analysis of the voice signal is used to decompose the signal into a suitable number of high-frequency details and extract the high-frequency bands of the signal. A discriminating measure, which measures the roll-off in power in the high-frequency bands of the signal, with respect to the decomposition index, is developed. The measure reflects the presence and degree of severity of hoarseness in the analyzed voice signals. The discriminating measure is supported by frequency-domain and time-series analyses of the high-frequency bands of normal and hoarse voice signals to provide a visual aid to the clinician or therapist. A database of sustained long vowels of normal and hoarse voices is created and used to assess the presence and degree of severity of hoarseness. The results obtained by the proposed method are compared to results obtained by perturbation analysis.", "keywords": "voice hoarseness;speech pathology;speech analysis;time-scale and time-series analysis", "title": "On the assessment and evaluation of voice hoarseness"} {"abstract": "We develop a method based on diffusion approximations in order to compute, under some general conditions, the queue length distribution for a queue in a network. Applications to computer networks and to time-sharing systems are presented.", "keywords": "sharing;network;order;applications;method;systems;diffuse;computer network;computation;general;timing;model;distributed", "title": "probabilistic models of computer systems"} {"abstract": "The compositional representation of a Markov chain using Kronecker algebra, according to a compositional model representation as a superposed generalized stochastic Petri net or a stochastic automata network, has been studied for a while. In this paper we describe a Kronecker expression and associated data structures that allow handling nets with synchronization over activities of different levels of priority. New algorithms for these structures are provided to perform an iterative solution method of Jacobi or Gauss-Seidel type.
These algorithms are implemented in the APNN Toolbox. We use this implementation in combination with GreatSPN and exercise an example that illustrates characteristics of the presented algorithms.", "keywords": "stochastic petri nets;performance evaluation tools;numerical algorithms", "title": "Integrating synchronization with priority into a Kronecker representation"} {"abstract": "We propose and analyze a class of penalty-function-free nonmonotone trust-region methods for nonlinear equality constrained optimization problems. The algorithmic framework yields global convergence without using a merit function and allows nonmonotonicity independently for both the constraint violation and the value of the Lagrangian function. Similar to the Byrd-Omojokun class of algorithms, each step is composed of a quasi-normal and a tangential step. Both steps are required to satisfy a decrease condition for their respective trust-region subproblems. The proposed mechanism for accepting steps combines nonmonotone decrease conditions on the constraint violation and/or the Lagrangian function, which leads to a flexibility and acceptance behavior comparable to filter-based methods. We establish the global convergence of the method. Furthermore, transition to quadratic local convergence is proved. Numerical tests are presented that confirm the robustness and efficiency of the approach.", "keywords": "nonmonotone trust-region methods;sequential quadratic programming;penalty function;global convergence;equality constraints;local convergence;large-scale optimization", "title": "Non-monotone trust region methods for nonlinear equality constrained optimization without a penalty function"} {"abstract": "A new formulation is presented for numerically computing the helical Chandrasekhar-Kendall modes in an axisymmetric torus. It explicitly imposes ∇·B = 0 and yields a standard matrix eigenvalue problem, which can then be solved by standard matrix eigenvalue techniques. Numerical implementation and computational results are shown for an axisymmetric torus typical of the reversed field pinch and the spherical tokamak.", "keywords": "magnetic relaxation;taylor state;chandrasekhar-kendall modes;eigenvalue;spherical tokamak;reversed field pinch;helicity injection", "title": "Numerical computation of the helical Chandrasekhar-Kendall modes"} {"abstract": "This paper presents the results of the Adaptive-Network-Based Fuzzy Inference System (ANFIS) for the prediction of path loss in a specific urban environment. A new ANFIS-based algorithm for tuning the path loss model is introduced in this work. The performance of the path loss model obtained from the proposed algorithm is compared to the Bertoni-Walfisch model, which is one of the best studied for propagation analysis involving buildings. This comparison is based on the mean square error between predicted and measured values. According to this error criterion, the errors of the predictions obtained from the proposed algorithm are smaller than those obtained from the Bertoni-Walfisch model. In this study, propagation measurements were carried out in the 900 MHz band in the city of Istanbul, Turkey.", "keywords": "anfis;propagation measurements;path loss;urban environment", "title": "Fuzzy adaptive neural network approach to path loss prediction in urban areas at GSM-900 band"} {"abstract": "Topology optimization can be seen as optimizing a distribution of small topological elements within a domain with respect to given specifications.
A numerical topology gradient (TG) algorithm is applied in the context of electromagnetism for optimizing microwave devices, computing the sensitivity with respect to adding or removing small metallic elements. This method leads to an optimum topology with very little initial information and acceptable time consumption. The method is applied to the design of a microstrip component in which the topology gradient is directly used as a direction of descent. However, in some ill-behaved problems, the topology gradient is not sufficient to converge to the global optimum. In the latter case, the basic TG is coupled with a genetic algorithm (GA) to make the algorithm more suitable for solving problems with local optima.", "keywords": "shape optimization;topology gradient;genetic algorithm", "title": "Optimization of microwave devices combining topology gradient and genetic algorithm"} {"abstract": "We present an algorithm for the layout of undirected compound graphs, relaxing restrictions of previously known algorithms in regards to topology and geometry. The algorithm is based on the traditional force-directed layout scheme with extensions to handle multi-level nesting, edges between nodes of arbitrary nesting levels, varying node sizes, and other possible application-specific constraints. Experimental results show that the execution time and quality of the produced drawings with respect to commonly accepted layout criteria are quite satisfactory. The algorithm has also been successfully implemented as part of a pathway integration and analysis toolkit named PATIKA, for drawing complicated biological pathways with compartmental constraints and arbitrary nesting relations to represent molecular complexes and various types of pathway abstractions.", "keywords": "information visualization;graph drawing;force-directed graph layout;compound graphs;bioinformatics", "title": "A layout algorithm for undirected compound graphs"} {"abstract": "A knowledge-based reactive scheduling system is proposed to answer the requirements of Emergency Departments (EDs). The algorithm takes into account detailed patient priority, arrival time, flow time and doctor load. The main aim is to determine the patients who have higher priorities initially, and then minimize their waiting times. To achieve this aim, physicians and other related workers can use an interactive system. In this study, we evaluated the existing system by comparing it with the proposed system. Also, reactive scheduling cases were evaluated for events such as decreasing the number of doctors, changing durations and the arrival of an urgent patient into the system. All experiments were performed with the proposed algorithm and the right-shift rescheduling approach.", "keywords": "knowledge-based system;reactive scheduling;emergency department;health care system;patient priorities", "title": "A knowledge-based scheduling system for Emergency Departments"} {"abstract": "Timed I/O automata (TIOA) is a mathematical framework for modeling and verification of distributed systems that involve discrete and continuous dynamics. TIOA can be used, for example, to model a real-time software component controlling a physical process. The TIOA model is sufficiently general to subsume other models in use for timed systems. The Tempo Toolset, currently under development, is aimed at supporting system development based on TIOA specifications.
The Tempo Toolset is an extension of the IOA toolkit, which provides a specification simulator, a code generator, and both model checking and theorem proving support for analyzing specifications. This paper focuses on the modeling of timed systems and their properties with TIOA and on the use of TAME4TIOA, the TAME (Timed Automata Modeling Environment) based theorem-proving support provided in Tempo, for proving system properties, including timing properties. Several examples are provided by way of illustration.", "keywords": "system development frameworks;modeling environments;tool suites;automata models;timed automata;hybrid systems;formal methods;specification;verification;theorem proving", "title": "Specifying and proving properties of timed I/O automata using Tempo"} {"abstract": "In this paper, we study the problem of distributed virtual backbone construction in sensor networks, where the coverage areas of nodes are disks with different radii. This problem is modeled by the construction of a minimum connected dominating set (MCDS) in geometric k-disk graphs. We derive the size relationship between any maximal independent set (MIS) and an MCDS in geometric k-disk graphs, and apply it to analyze the performance of two distributed connected dominating set (CDS) algorithms we propose in this paper. These algorithms have bounded performance ratios and low communication overhead. To the best of our knowledge, the results reported in this paper represent the state-of-the-art.", "keywords": "sensor networks with asymmetric transmission links;connected dominating set;geometric k-disk graphs;maximal independent set", "title": "Distributed virtual backbone construction in sensor networks with asymmetric links"} {"abstract": "Chip-multiprocessors are an emerging trend for embedded systems. In this article, we introduce a real-time Java multiprocessor called JopCMP. It is a symmetric shared-memory multiprocessor, and consists of up to eight Java Optimized Processor (JOP) cores, an arbitration control device, and a shared memory. All components are interconnected via a system-on-chip bus. The arbiter synchronizes the access of multiple CPUs to the shared main memory. In this article, three different arbitration policies are presented, evaluated, and compared with respect to their real-time and average-case performance: a fixed-priority, a fair-based, and a time-sliced arbiter. Tasks running on different CPUs of a chip-multiprocessor (CMP) influence each other's execution times when accessing a shared memory. Therefore, the system needs an arbiter that is able to limit the worst-case execution time of a task running on a CPU, even though tasks executing simultaneously on other CPUs access the main memory. Our research shows that timing analysis is in fact possible for homogeneous multiprocessor systems with a shared memory. The timing analysis of tasks, executing on the CMP using time-sliced memory arbitration, leads to viable worst-case execution time bounds. The time-sliced arbiter divides the memory access time into equal time slots, one time slot for each CPU. This memory arbitration scheme allows for a calculation of upper bounds of Java application worst-case execution times, depending on the number of CPUs, the time slot size, and the memory access time. Examples of worst-case execution time calculation are presented, and the analyzed results of a real-world application task are compared to measured execution time results.
Finally, we evaluate the tradeoffs when using a time-predictable solution compared to using average-case optimized chip-multiprocessors, using three different benchmarks. These experiments are carried out by executing the programs on the CMP prototype.", "keywords": "design;experimentation;measurement;performance;real-time system;multiprocessor;java processor;shared memory;worst-case execution time", "title": "A Real-Time Java Chip-Multiprocessor"} {"abstract": "Among existing kinematic analysis methods of mechanisms, the techniques based on finite elements represent a generally applicable alternative which enables a wide variety of problems to be solved, including linear (velocities, accelerations, jerk, etc.) and non-linear ones (position). To model a mechanism via these techniques, the link element may be used to introduce a distance constraint between two points. The stiffness matrix assembly of these link elements enables stiffness matrix construction from the model, from which the kinematic behaviour of the mechanism may be extracted. Normally, kinematic link conditions introduced directly into the system stiffness matrix are used to impose point-to-line constraints such as those arising from prismatic joints. A new finite element is presented in this paper, defined by its stiffness or geometric matrix, capable of alternatively modeling the constraints imposed by the prismatic joint. This new element offers numerous advantages over the procedure based on the aforementioned link conditions, particularly in the case of non-linear problems.", "keywords": "finite elements;mechanism kinematics;linear problems;prismatic joint;velocity;acceleration", "title": "A new finite element to represent prismatic joint constraints in mechanisms"} {"abstract": "In this paper, a novel parametric and global image histogram thresholding method is presented. It is based on the estimation of the statistical parameters of object and background classes by the expectation-maximization (EM) algorithm, under the assumption that these two classes follow a generalized Gaussian (GG) distribution. The adoption of such a statistical model as an alternative to the more common Gaussian model is motivated by its attractive capability to approximate a broad variety of statistical behaviors with a small number of parameters. Since the quality of the solution provided by the iterative EM algorithm is strongly affected by initial conditions (which, if inappropriately set, may lead to unreliable estimation), a robust initialization strategy based on genetic algorithms (GAs) is proposed. Experimental results obtained on simulated and real images confirm the effectiveness of the proposed method.", "keywords": "image thresholding;expectation-maximization algorithm;generalized gaussian distribution;genetic algorithms", "title": "Image thresholding based on the EM algorithm and the generalized Gaussian distribution"} {"abstract": "In this paper, we propose a fast fractal image coding method based on LMSE (least mean square error) analysis and subblock feature. The proposed method focuses on efficient search of contrast scaling, position of its matched domain block, and isometric transform for a range block. The contrast scaling and the domain block position are searched using a cost function derived from the LMSE analysis of the range block and its fractal-approximated block. The isometric transform is searched using 2 x 2 blocks formed with the averages of subblocks of the range block and domain block.
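The time-sliced arbitration bound described in the real-time Java chip-multiprocessor abstract above invites a back-of-envelope sketch. The formula below is a plausible worst-case accounting under the stated assumptions (fixed round-robin slots, one access per slot), not the paper's exact analysis; the function name and parameters are illustrative.

```python
# Hedged back-of-envelope WCET bound for a single shared-memory access
# under a time-sliced arbiter (illustrative; NOT the paper's exact analysis).
# Assumptions: n_cpus CPUs, each owning a slot of slot_cycles cycles in a
# fixed round-robin schedule; one access takes mem_cycles <= slot_cycles
# and must complete within the CPU's own slot.

def worst_case_access_cycles(n_cpus: int, slot_cycles: int, mem_cycles: int) -> int:
    assert mem_cycles <= slot_cycles, "an access must fit into one slot"
    # Worst case: the request arrives just too late to finish in the
    # current slot, so the CPU waits for the remainder of its own slot
    # plus the slots of all other CPUs before the access can start.
    worst_wait = (mem_cycles - 1) + (n_cpus - 1) * slot_cycles
    return worst_wait + mem_cycles

# Example: 4 cores, 10-cycle slots, 6-cycle memory access.
print(worst_case_access_cycles(4, 10, 6))   # -> 41 cycles
```

Note how the bound grows linearly with the number of CPUs and the slot size, which matches the abstract's statement that the upper bounds depend on the number of CPUs, the time slot size, and the memory access time.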
Experimental results show that the encoding time of a conventional fractal image coding with our search method is 25.6-39.7 times faster than that with the full search method at the same bit rate, while giving a PSNR decrement of 0.2-0.7 dB with negligible deterioration in subjective quality. It is also shown that the encoding time of a conventional fractal image coding with our search method is 3.4-4.2 times faster than Jacquin's fractal image coding and is superior by up to 0.8 dB in PSNR. It also yields reconstructed images of better quality.", "keywords": "image coding;fractal image coding;lmse;contractive mapping", "title": "Fast fractal image coding based on LMSE analysis and subblock feature"} {"abstract": "This study employed a direct numerical simulation (DNS) technique to contrast the plume behaviours and mixing of passive scalar emitted from line sources (aligned with the spanwise direction) in neutrally and unstably stratified open-channel flows. The DNS model was developed using the Galerkin finite element method (FEM) employing trilinear brick elements with equal-order interpolating polynomials that solved the momentum and continuity equations, together with conservation of energy and mass equations in incompressible flow. The second-order accurate fractional-step method was used to handle the implicit velocity-pressure coupling in incompressible flow. It also segregated the solution of the advection and diffusion terms, which were then integrated in time, respectively, by the explicit third-order accurate Runge-Kutta method and the implicit second-order accurate Crank-Nicolson method. The buoyancy term under unstable stratification was integrated in time explicitly by the first-order accurate Euler method. The DNS FEM model calculated the scalar-plume development and the mean plume path. In particular, it calculated the plume meandering in the wall-normal direction under unstable stratification that agreed well with the laboratory and field measurements, as well as previous modelling results available in the literature. ", "keywords": "fluid turbulence;direct numerical simulation;finite element method;open-channel flow;passive scalar plume", "title": "Finite element solution to passive scalar transport behind line sources under neutral and unstable stratification"} {"abstract": "The optimal topology of a Michell's truss is considered as a benchmark problem. It has been observed that this optimal topology is only applicable up to a particular ratio of the distance from the loading point to the line joining the supports to the span of the supports. Once the ratio exceeds this critical ratio, the optimum topology of the Michell's truss changes. It has been observed from the studies that it is possible to demarcate the region of two different types of optimum topologies by a linear relation. Extending this problem to 3-D, a similar change between distinct optimum topologies has been observed above and below a critical height-to-radius ratio. This critical ratio, as in the 2-D case, follows an almost linear relation.", "keywords": "genetic algorithms;optimization;2-d truss;3-d truss;structure;topology", "title": "The effect of radius/height ratio on truss optimization"} {"abstract": "The need to provide effective mental health treatments for adolescents has been described as a global public health challenge [27].
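For the fast fractal coding abstract above, the LMSE fit of the contrast scaling s and offset o of a range block R against a decimated domain block D is the standard least-squares one used in fractal coding generally. A minimal sketch follows; the paper's specific cost function and subblock features are not reproduced, and the function name is illustrative.

```python
# Least-mean-square-error fit of contrast scaling s and offset o for a
# range block R against a (decimated) domain block D, as used in fractal
# coding generally; a sketch, not necessarily the paper's exact cost function.
import numpy as np

def lmse_contrast_fit(R: np.ndarray, D: np.ndarray):
    """Minimize ||R - (s*D + o)||^2 over scalars s and o."""
    R, D = R.ravel().astype(float), D.ravel().astype(float)
    d_var = D.var()
    s = 0.0 if d_var == 0 else ((R - R.mean()) * (D - D.mean())).mean() / d_var
    o = R.mean() - s * D.mean()
    err = np.mean((R - (s * D + o)) ** 2)    # residual MSE used to rank domains
    return s, o, err

R = np.array([[10, 12], [14, 16]])
D = np.array([[1, 2], [3, 4]])
print(lmse_contrast_fit(R, D))               # -> (2.0, 8.0, 0.0)
```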
In this paper we discuss the exploratory evaluations of the first adolescent intervention to fully integrate a computer game implementing Cognitive Behavioural Therapy. Three distinct studies are presented: a detailed evaluation in which therapists independent of the design team used the game with 6 adolescents experiencing clinical anxiety disorders; a study in which a member of the design team used the game with 15 adolescents; and finally a study assessing the acceptability of the game and intervention with 216 practicing therapists. Findings are presented within the context of a framework for the design and evaluation of complex health interventions. The paper provides an in-depth insight into the use of therapeutic games to support adolescent interventions and provides stronger evidence than previously available for both their effectiveness and acceptability to stakeholders.", "keywords": "computer games;evaluations;cognitive behavioural therapy;complex health interventions;adolescent mental health", "title": "exploratory evaluations of a computer game supporting cognitive behavioural therapy for adolescents"} {"abstract": "For optical waveguides with high index contrast and sharp corners, high order full-vectorial mode solvers are difficult to develop, due to the field singularities at the corners. A recently developed method (the so-called BIE-NtD method) based on boundary integral equations (BIEs) and Neumann-to-Dirichlet (NtD) maps achieves high order of accuracy for dielectric waveguides. In this paper, we develop two new BIE mode solvers, including an improved version of the BIE-NtD method and a new BIE-DtN method based on Dirichlet-to-Neumann (DtN) maps. For homogeneous domains with sharp corners, we propose better BIEs to compute the DtN and NtD maps, and new kernel-splitting techniques to discretize hypersingular operators. Numerical results indicate that the new methods are more efficient and more accurate, and work very well for metallic waveguides and waveguides with extended mode profiles.", "keywords": "optical waveguides;boundary integral equations;dirichlet-to-neumann map;neumann-to-dirichlet map;mode solvers;hypersingular integral operators", "title": "Efficient high order waveguide mode solvers based on boundary integral equations"} {"abstract": "One of the significant workloads in current generation desktop processors and mobile devices is multimedia processing. Large on-chip caches are common in modern processors, but large caches will result in increased power consumption and increased access delays. Regular data access patterns in streaming multimedia applications and video processing applications can provide high hit-rates, but due to issues associated with access time, power and energy, caches cannot be made very large. Characterizing and optimizing the memory system is conducive to designing power- and performance-efficient multimedia application processors. Performance tradeoffs for multimedia applications have been studied in the past; however, power and energy tradeoffs for caches in multimedia processing have not been adequately examined. In this paper, we characterize multimedia applications for I-cache and D-cache power and energy using a multilevel cache hierarchy. Both dynamic and static power increase with increasing cache sizes; however, the increase in dynamic power is small. The increase in static power is significant, and becomes increasingly relevant for smaller feature sizes.
There is significant static power dissipation, approximately 45%, in L1 and L2 caches at the 70 nm technology node, emphasizing the fact that future multimedia systems must be designed by taking leakage power reduction techniques into account. The energy consumption of on-chip L2 caches is seen to be very sensitive to cache size variations. Sizes larger than 16 k for I-caches and 32 k for D-caches will not be efficient choices to maintain power and performance balance. Since multimedia applications spend significant amounts of time in integer operations, to improve the performance, we propose implementing low power full adders and hybrid multipliers in the data path, which results in 9% to 21% savings in the overall power consumption.", "keywords": "cache;leakage power;low power;multimedia workload characterization", "title": "Caches for Multimedia Workloads: Power and Energy Tradeoffs"} {"abstract": "The analysis and modeling of business processes are the basis on which management methodologies, simulation models and information systems are developed. The goal of this paper is to point out the possibility of establishing relationships between processes in supply networks and the functioning of the whole system. In this integrated system, all relevant factors for supply network management, both at the global level and at the single process level, could be observed. The idea is to form a process library of the supply network, which would contain process descriptions, inputs, outputs, and the way each process is realized. Every record in the library represents a single instance of that process. The relationships of one process with another depend on process structure and the way of its realization. Every instance of a process represents its realization. The assembly of mutually compatible instances of all processes represents one realization of the supply network. The key problem, triggering the process realization, is solved by a specific production expert system. Process realization is very similar to a real system, because environmental influences, uncertainty, and available resources are taken into consideration. As the output, the aggregate of relevant parameters for the evaluation of model functioning is derived. This concept presents the basis of a virtual framework for supply network simulation.", "keywords": "supply network;modeling;simulation model;analysis", "title": "Methodology for modeling and analysis of supply networks"} {"abstract": "Leakage of confidential information represents a serious security risk. Despite a number of novel theoretical advances, it has been unclear if and how quantitative approaches to measuring leakage of confidential information could be applied to substantial, real-world programs. This is mostly due to the high complexity of computing precise leakage quantities. In this paper, we introduce a technique, based on bounded model checking, which makes it possible to decide whether a program conforms to a quantitative policy and which scales to large state spaces. Our technique is applied to a number of officially reported information leak vulnerabilities in the Linux Kernel. Additionally, we also analysed authentication routines in the Secure Remote Password suite and in an Internet Message Support Protocol implementation. Our technique shows when there is unacceptable leakage; the same technique is also used to verify, for the first time, that the applied software patches indeed plug the information leaks.
This is the first demonstration of quantitative information flow addressing security concerns of real-world industrial programs.", "keywords": "quantitative information flow;linux kernel;information leakage", "title": "quantifying information leaks in software"} {"abstract": "Organ motion due to patient breathing introduces a technical challenge for dosimetry and lung tumor treatment by hadron therapy. Accurate dose distribution estimation requires patient-specific information on tumor position, size, and shape as well as information regarding the material density and stopping power of the media along the beam path. A new 4D dosimetry method was developed, which can be coupled to any motion estimation method. As an illustration, the new method was implemented and tested with a biomechanical model and clinical data.", "keywords": "particle therapy;moving organs;dosimetry;4d-ct", "title": "Four-dimensional radiotherapeutic dose calculation using biomechanical respiratory motion description"} {"abstract": "Hebb postulated cell assemblies as the basic computational elements for understanding cortical processing. He defined them as temporary associations of neurons that organize fast and flexibly into functional units, using correlation-based short-term synaptic plasticity. Based on the properties of spiking neurons, we implement dynamical assemblies that organize completely without synaptic plasticity. Instead, we find varying effective connection strengths that reflect the organizational process. We propose that these dynamic reorganization capabilities, occurring on a fast temporal scale, may be a central element of cortical processing.", "keywords": "hebb;spiking neurons;pools;dynamical assemblies;grouping;correlations;coherence;oscillatory activity", "title": "Fast dynamic organization without short-term synaptic plasticity: A new view on Hebb's dynamical assemblies"} {"abstract": "A large variety of systems can be modelled by Petri nets. Their formal semantics are based on linear algebra, which in particular allows the calculation of a Petri net's state space. Since state space explosion is still a serious problem, efficiently calculating, representing, and analysing the state space is mandatory. We propose a formal semantics of Petri nets based on executable relation-algebraic specifications. Thereupon, we suggest how to calculate the markings reachable from a given one simultaneously. We provide an efficient representation of reachability graphs and show in a correct-by-construction approach how to efficiently analyse their properties. Therewith we cover two aspects: modelling and model checking systems by means of one and the same logic-based approach. On the practical side, we explore the power and limits of relation-algebraic concepts for concurrent system analysis. ", "keywords": "relation algebra;petri nets;reachability graph;state space analysis;systems analysis", "title": "State space analysis of Petri nets with relation-algebraic methods"} {"abstract": "Longitudinal imaging studies are frequently used to investigate temporal changes in brain morphology and often require spatial correspondence between images achieved through image registration. Besides morphological changes, image intensity may also change over time, for example when studying brain maturation. However, such intensity changes are not accounted for in image similarity measures for standard image registration methods.
Hence, 1) local similarity measures, 2) methods estimating intensity transformations between images, and 3) metamorphosis approaches have been developed to either achieve robustness with respect to intensity changes or to simultaneously capture spatial and intensity changes. For these methods, longitudinal intensity changes are not explicitly modeled and images are treated as independent static samples. Here, we propose a model-based image similarity measure for longitudinal image registration that estimates a temporal model of intensity change using all available images simultaneously.", "keywords": "deformable registration;longitudinal registration;magnetic resonance imaging;nonuniform appearance change", "title": "Longitudinal Image Registration With Temporally-Dependent Image Similarity Measure"} {"abstract": "Rephasing strategy is one of the main methods used for phase balancing and neutral current reduction in electrical distribution networks, and the reconfiguration technique is an effective method for network loss reduction. In this paper, a new method for the simultaneous implementation of reconfiguration and phase balancing strategies is presented as a combinational strategy. In order to solve the proposed optimization problem, a Nelder-Mead algorithm combined with a bacterial foraging algorithm (BFNM) is used based on a fuzzy multi-objective function. The proposed method allows for the simultaneous execution of reconfiguration and phase balancing while minimizing the interruption cost of rephasing, in addition to eliminating network unbalance and reducing neutral current and network losses. To demonstrate the efficiency of the BFNM algorithm, its performance is compared with bacterial foraging (BF), particle swarm optimization (PSO), genetic and immune algorithms (GA and IA). The proposed method is applied to the IEEE 123-bus test network for evaluation. The simulation results confirm the efficiency of the method in reducing system costs and in network phase balancing.", "keywords": "phase balancing;rephasing strategy;reconfiguration technique;bfnm algorithm;distribution networks", "title": "Simultaneous optimization of phase balancing and reconfiguration in distribution networks using BFNM algorithm"} {"abstract": "In this paper, we propose a method for robust detection of the vowel onset points (VOPs) from noisy speech. The proposed VOP detection method exploits the spectral energy at formant frequencies of the speech segments present in the glottal closure region. In this work, formants are extracted by using the group delay function, and glottal closure instants are extracted by using a zero-frequency-filter-based method. Performance of the proposed VOP detection method is compared with the existing method, which uses the combination of evidence from excitation source, spectral peaks energy and modulation spectrum. Speech data from the TIMIT database and noise samples from the NOISEX database are used for analyzing the performance of the VOP detection methods. Significant improvement in the performance of VOP detection is observed by using the proposed method compared to the existing one.", "keywords": "vowel onset point ;formant frequencies;glottal closure region;excitation source;spectral peaks;modulation spectrum", "title": "Vowel onset point detection for noisy speech using spectral energy at formant frequencies"} {"abstract": "Fuzzy operators are an essential tool in many fields and the operation of composition is often needed.
However, it is very useful to have operators for which the order of composition does not affect the result. In this paper, we analyze when permutability appears, that is, when the order of application of the operators does not change the outcome. We characterize permutability in the case of the composition of fuzzy consequence operators and the dual case of fuzzy interior operators. We prove that for these cases, permutability is completely connected to the preservation of the operator type. We also study the particular case of fuzzy operators induced by fuzzy relations through Zadeh's compositional rule and the inf-composition. For these cases, we connect permutability of the fuzzy relations (using the sup-* composition) with permutability of the induced operators. Special attention is paid to the cases of operators induced by fuzzy preorders and similarities. Finally, we use these results to relate the operator induced by the transitive closure of the composition of two reflexive fuzzy relations with the closure of the operator this composition induces.", "keywords": "permutability;fuzzy consequence operator;fuzzy closure operator;fuzzy interior operator;fuzzy preorder;indistinguishability relation", "title": "Permutable fuzzy consequence and interior operators and their connection with fuzzy relations"} {"abstract": "A Process Query System, a new approach to representing and querying multiple hypotheses, is proposed for cross-document co-reference and linking based on existing entity extraction, co-reference and database name-matching technologies. A crucial component of linking entities across documents is the ability to recognize when different name strings are potential references to the same entity. Given the extraordinary range of variation international names can take when rendered in the Roman alphabet, this is a daunting task. The extension of name variant matching to free text will add important text mining functionality for intelligence and security informatics toolkits.", "keywords": "co-reference;multiple hypothesis tracking;name matching;natural language processors", "title": "Text mining, names and security"} {"abstract": "An extensive computational study was performed to elucidate the mechanism of formation of dibromoepoxide from cyclohexanone and bromoform. In this reaction, the formation of dihaloepoxide 2 is postulated as a key step that determines the distribution and stereochemistry of products. Two mechanistic reaction paths were investigated: the addition of dibromocarbene to the carbonyl group of the ketone, and the addition of the tribromomethyl carbanion to the same (C=O) group. The mechanisms for the addition reactions of dibromocarbenes and tribromomethyl carbanions with cyclohexanone have been investigated at the ab initio HF/6-311++G** and MP2/6-311+G* levels of theory. Solvent effects on these reactions have been explored by calculations which included a continuum polarizable conductor model (CPCM) for the solvent (H2O). The calculations showed that both mechanisms are possible and are exothermic, but have markedly different activation energies.", "keywords": "ab initio calculations;dibromocarbene;dihaloepoxides;reaction mechanisms;tribromomethyl carbanion", "title": "Carbenic vs.
ionic mechanistic pathway in reaction of cyclohexanone with bromoform"} {"abstract": "Geographical Information Systems were originally intended to deal with snapshots representing a single state of some reality, but there are more and more applications requiring the representation and querying of time-varying information. This work addresses the representation of moving objects in GIS. The continuous nature of movement raises problems for representation in information systems due to the limited capacity of storage systems and the inherently discrete nature of measurement instruments. The stored information has therefore to be partial and does not allow an exact inference of the real-world object's behavior. To cope with this, query operations must take uncertainty into consideration in their semantics in order to give accurate answers to the users. The paper proposes a set of operations to be included in a GIS or a spatial database to make it able to answer queries on the spatio-temporal behavior of moving objects. The operations have been selected according to the requirements of real applications, and their semantics with respect to uncertainty is specified. A collection of examples from a case study is included to illustrate the expressiveness of the proposed operations.", "keywords": "moving objects;movement operations;spatio-temporal databases;spatio-temporal uncertainty", "title": "query operations for moving objects database systems"} {"abstract": "Fingerprinting codes are used to prevent dishonest users (traitors) from redistributing digital contents. In this context, codes with the traceability (TA) property and codes with the identifiable parent property (IPP) allow the unambiguous identification of traitors. The existence conditions for IPP codes are less strict than those for TA codes. In contrast, IPP codes do not have an efficient decoding algorithm in the general case. Other codes that have been widely studied but possess weaker identification capabilities are separating codes. It is a well-known result that a TA code is an IPP code, and an IPP code is a separating code. The converse is in general false. However, it has been conjectured that for Reed-Solomon codes all three properties are equivalent. In this paper we investigate this equivalence, providing a positive answer when the number of traitors divides the size of the ground field.", "keywords": "fingerprinting and traitor tracing;identifiable parent property;separating codes;mds codes;reed-solomon codes", "title": "ON THE RELATIONSHIP BETWEEN THE TRACEABILITY PROPERTIES OF REED-SOLOMON CODES"} {"abstract": "Object tracking with occlusion handling is a challenging problem in automated video surveillance. In particular, occlusion handling and tracking have often been considered as separate modules. This paper proposes a tracking method in the context of video surveillance, where occlusions are automatically detected and handled to solve ambiguities. Hence, the tracking process can continue to track the different moving objects correctly. The proposed approach is based on sub-blobbing, that is, blobs representing moving objects are segmented into sections whenever occlusions occur. These sub-blobs are then treated as blobs with the occluded ones ignored. By doing so, the tracking of objects becomes more accurate and less sensitive to occlusions. We have also used a feature-based framework for identifying the tracked objects, in which several flexible attributes are involved.
Experiments on several videos have clearly demonstrated the success of the proposed method.", "keywords": "occlusion handling;features;video surveillance;object tracking", "title": "occlusion handling based on sub-blobbing in automated video surveillance system"} {"abstract": "With the advent of Web 2.0 and the emergence of improved technologies to enhance UI, the importance of user experience and intuitiveness of Web interfaces led to the growth and success of Interaction Design. Web designers often turn to pre-defined and well-founded design patterns and user interaction paradigms to build novel and more effective Web interfaces. The rationale behind Interaction Design patterns is based on user behavior and Web navigation studies. The \"semantics\" of user interaction is therefore a rich and interesting area that is worth exploring in association with traditional Semantic Web approaches. In this paper, we present our first attempt at an ontological formalization of interaction patterns and its implications. To prove our concept, we illustrate the mapping approach we employed to relate that interaction formalization to data-specific ontologies, to create Web interfaces to browse and navigate that specialized kind of information; the aforementioned ontologies and mapping rules are the basis of the internal operation of a Semantic Web application framework called STAR:chart, leveraged to build the Service-Finder portal; finally, we present our evaluation results.", "keywords": "web interfaces;semantics;semantic web;interaction semantics", "title": "towards the formalization of interaction semantics"} {"abstract": "Possible links between FoMO, social media engagement, and three motivational constructs were examined. A new scale was designed to measure the extent to which students used social media tools in the classroom. The links between social media engagement and motivational factors were mediated by FoMO.", "keywords": "fear of missing out;social media engagement;self-determination theory;academic motivation;higher education", "title": "College students academic motivation, media engagement and fear of missing out"} {"abstract": "Recommender systems apply knowledge discovery techniques to the problem of making personalized recommendations for products or services during a live interaction. These systems, especially collaborative filtering based on user, are achieving widespread success on the Web. The tremendous growth in the amount of available information and in the kinds of commodities offered on Web sites in recent years poses some key challenges for recommender systems. One of these challenges is the ability of recommender systems to be adaptive to environments where users have many completely different interests or items have completely different content (we call this the Multiple-interests and Multiple-content problem). Unfortunately, traditional collaborative filtering systems cannot make accurate recommendations in these two cases, because the predicted item for the active user is not consistent with the common interests of his or her neighbor users. To address this issue, we have explored a hybrid collaborative filtering method, collaborative filtering based on item and user, which combines collaborative filtering based on item and collaborative filtering based on user.
Collaborative filtering based on item and user analyzes the user-item matrix to identify the similarity of the target item to other items, generates the set of items similar to the target item, and determines the neighbor users of the active user for the target item according to the similarity of other users to the active user over those similar items. In this paper, we first analyze the limitations of collaborative filtering based on user and collaborative filtering based on item, respectively, and explain why collaborative filtering based on user is not adaptive to Multiple-interests and Multiple-content recommendation. Based on this analysis, we present collaborative filtering based on item and user for Multiple-interests and Multiple-content recommendation. Finally, we experimentally evaluate the results and compare them with collaborative filtering based on user and collaborative filtering based on item, respectively. The experiments suggest that collaborative filtering based on item and user provides dramatically better recommendation quality than either collaborative filtering based on user or collaborative filtering based on item. ", "keywords": "collaborative filtering;recommender systems;personalization;e-commerce", "title": "A hybrid collaborative filtering method for multiple-interests and multiple-content recommendation in E-Commerce"} {"abstract": "Covering and packing integer programs model a large family of combinatorial optimization problems. The current-best approximation algorithms for these are an instance of the basic probabilistic method: showing that a certain randomized approach produces a good approximation with positive probability. This approach seems inherently sequential; by employing the method of alteration we present the first RNC and NC approximation algorithms that match the best sequential guarantees. Extending our approach, we get the first RNC and NC approximation algorithms for certain multi-criteria versions of these problems. We also present the first NC algorithms for two packing and covering problems that are not subsumed by the above result: finding large independent sets in graphs, and rounding fractional Group Steiner solutions on trees.", "keywords": "approximation algorithms;method;families;approximation;matching;trees;graph;probability;version;group;combinatorial optimization;model;algorithm;randomization;posit", "title": "new approaches to covering and packing problems"} {"abstract": "Stage-transition models based on the American Diagnostic and Statistical Manual (DSM) are generally applied in epidemiology and genetics research on drug dependence syndromes associated with cannabis, cocaine, and other internationally regulated drugs (IRDs). Difficulties with DSM stage-transition models have surfaced during cross-national research intended to provide a truly global perspective, such as the work of the World Mental Health Surveys Consortium. Alternative simpler dependence-related phenotypes are possible, including population-level count process models for early steps, before the coalescence of clinical features into a coherent syndrome (e.g., zero-inflated Poisson [ZIP] regression). Selected findings are reviewed, based on ZIP modeling of alcohol, tobacco, and IRD count processes, with an illustration that may stimulate new research on genetic susceptibility traits.
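To make the neighbor-selection idea in the hybrid collaborative filtering abstract above concrete, here is a minimal sketch under simplifying assumptions (cosine similarity, a dense ratings matrix where 0 means unrated, illustrative function names); the paper's exact similarity and prediction formulas may differ.

```python
# Sketch of the hybrid item+user CF idea: pick items similar to the target
# item, then pick neighbor users by comparing users only on those items.
import numpy as np

def cosine_sim(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return 0.0 if na == 0 or nb == 0 else float(a @ b / (na * nb))

def predict(ratings, user, item, k_items=2, k_users=2):
    n_users, n_items = ratings.shape
    # 1) items most similar to the target item (column-wise similarity)
    sims = [(cosine_sim(ratings[:, item], ratings[:, j]), j)
            for j in range(n_items) if j != item]
    top_items = [j for _, j in sorted(sims, reverse=True)[:k_items]]
    # 2) compare users to the active user only on those similar items
    sub = ratings[:, top_items]
    usims = [(cosine_sim(sub[user], sub[u]), u)
             for u in range(n_users) if u != user and ratings[u, item] > 0]
    neighbors = sorted(usims, reverse=True)[:k_users]
    # 3) similarity-weighted average of the neighbors' ratings on the item
    num = sum(s * ratings[u, item] for s, u in neighbors)
    den = sum(s for s, _ in neighbors)
    return num / den if den else 0.0

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]])
print(predict(R, user=1, item=1))
```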
The annual National Surveys on Drug Use and Health (NSDUH) can be readily modified for this purpose, along the lines of a truly anonymous research approach that can help make NSDUH-type cross-national epidemiological surveys more useful in the context of subsequent genomewide association (GWAS) research and post-GWAS investigations with a truly global health perspective.", "keywords": "alcohol;tobacco;dependence;epidemiology;phenotype", "title": "Novel phenotype issues raised in cross-national epidemiological research on drug dependence"} {"abstract": "In this paper, we report a TCAD study on the gate-all-around cylindrical (GAAC) transistor for sub-10-nm scaling. The GAAC transistor device physics, TCAD simulation, and proposed fabrication procedure are discussed. Among other novel fin field-effect transistor (FinFET) devices, the gate-all-around cylindrical device is particularly suited to reducing the problems of conventional multi-gate FinFETs, improving device performance, and enhancing scaling-down capabilities. With the gate-all-around cylindrical architecture, the transistor is controlled essentially by an infinite number of gates surrounding the entire cylinder-shaped channel. Electrical integrity within the channel is improved by reducing the leakage current due to non-symmetrical field accumulation, such as the corner effect. Our proposed fabrication procedure for making devices having the gate-all-around cylindrical (GAAC) device architecture is also discussed.", "keywords": "gate-all-around cylindrical transistor;device physics;tcad simulation;fabrication procedure", "title": "TCAD study on gate-all-around cylindrical (GAAC) transistor for CMOS scaling to the end of the roadmap"} {"abstract": "In this paper, we propose a compact rule processing engine to process acceleration data on a small sensor device. Our proposed engine enables us to develop applications using acceleration data on the small device with a quite simple and short description. We describe the outline of both our proposed rule engine and an implementation on our developed sensor device called the Mo-Co-Mi chip.", "keywords": "ubiquitous computing;sensor node", "title": "a rule engine to process acceleration data on small sensor nodes"} {"abstract": "MUG1 is a compiler generating system developed and implemented at the Technical University of Munich. The structure of the system and the concepts used in the compiler description are presented. Special emphasis is laid on the use of MUG1 as a tool for the incremental design of programming languages and the construction of their compilers in parallel.", "keywords": "concept;tool;compilation;use;structure;systems;parallel;design;programming language;incremental", "title": "mug1 - an incremental compiler-compiler"} {"abstract": "The availability of commodity multiprocessors and high-speed networks of workstations offers significant opportunities for addressing the increasing computational requirements of optimization applications. To leverage these potential benefits, it is important, however, to make parallel and distributed processing easily accessible to a wide audience of optimization programmers. This paper addresses this challenge by proposing parallel and distributed programming abstractions that keep the distance from sequential local search algorithms as small as possible. The abstractions, including parallel loops, interruptions, thread pools, and shared objects, are compositional and cleanly separate the optimization program and the parallel instructions.
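The zero-inflated Poisson (ZIP) model mentioned in the drug-dependence abstract above can be stated compactly: P(Y=0) = pi + (1-pi)e^(-lambda) and P(Y=k) = (1-pi)e^(-lambda)lambda^k/k! for k >= 1. A minimal maximum-likelihood sketch follows, on toy data; the covariates, survey design, and weighting of an actual NSDUH-style analysis are omitted, and the variable names are illustrative.

```python
# Minimal zero-inflated Poisson (ZIP) log-likelihood and fit (a sketch;
# covariates and survey weights omitted).
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def zip_neg_loglik(params, y):
    pi = 1 / (1 + np.exp(-params[0]))        # inflation prob, logit-parameterized
    lam = np.exp(params[1])                  # Poisson rate, log-parameterized
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

y = np.array([0, 0, 0, 0, 0, 0, 1, 2, 2, 3, 4, 7])   # toy symptom counts
fit = minimize(zip_neg_loglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
pi_hat, lam_hat = 1 / (1 + np.exp(-fit.x[0])), np.exp(fit.x[1])
print(pi_hat, lam_hat)
```

The structural-zero probability pi captures respondents who never start the count process at all, which is exactly what makes ZIP attractive for "steps early" in dependence, before a full syndrome coalesces.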
They have been evaluated experimentally on a variety of applications, including warehouse location and coloring, for which they provide significant speedups.", "keywords": "combinatorial optimization;local search;constraint programming;parallel;distributed;language", "title": "Parallel and distributed local search in COMET"} {"abstract": "Malignant melanoma has been thought to be related mainly to exposure to the sun or radiation. A review of the scientific literature reveals many significant correlations between exposure to benzene and benzene-containing solvents in the workplace and the occurrence of malignant melanoma, particularly in sites that have never been exposed to sunlight. A comparison of the positive correlations between such exposure and malignant melanoma reported by independent investigators with the negative findings reported by investigators with industry affiliations suggests that investigator affiliation may, at least in part, account for the discrepant findings. Based on independent studies, it is reasonable to conclude that malignant melanoma is causally related to employment-related chemical exposures in the petroleum refining industry.", "keywords": "benzene;malignant melanoma;industry studies;independent investigators;petroleum industry;chemical carcinogenesis;cutaneous malignancies", "title": "Causal Relationship from Exposure to Chemicals in Oil Refining and Chemical Industries and Malignant Melanoma"} {"abstract": "A control procedure for wind farms connected to a single converter is presented. The control method is based on vector control, providing high performance. The system operates in the maximum efficiency area due to the use of an MPPT. A power reduction method used in case of an electrical contingency is described. The proposed wind farm layout is claimed to improve efficiency and reliability.", "keywords": "wind power generation;high voltage direct current ;variable frequency wind farm;offshore wind power;wind turbine cluster", "title": "Control of a wind turbine cluster based on squirrel cage induction generators connected to a single VSC power converter"} {"abstract": "Collaborative recommender systems select potentially interesting items for each user based on the preferences of like-minded individuals. In particular, e-commerce has become a major domain in this research field due to its business interest, since identifying the products users may like or find useful can boost consumption. In recent years, a great number of works in the literature have focused on the improvement of these tools. Expertise, trust and reputation models are incorporated in collaborative recommender systems to increase their accuracy and reliability. However, current approaches require extra data from the users that is often not available. In this paper, we present two contributions that apply a semantic approach to improve recommendation results transparently to the users. On the one hand, we automatically build implicit trust networks in order to incorporate trust and reputation in the selection of the set of like-minded users that will drive the recommendation. On the other hand, we propose a measure of practical expertise by exploiting the data available in any e-commerce recommender system - the consumption histories of the users. 
", "keywords": "personalized e-commerce;semantic reasoning;collaborative filtering;trust;reputation;expertise", "title": "Semantic inference of user's reputation and expertise to improve collaborative recommendations"} {"abstract": "Multi-label classification and knowledge-based approach to image annotation. The definition of the fuzzy knowledge representation scheme based on FPN. Novel data-driven algorithms for automatic acquisition of fuzzy knowledge. Novel inference based algorithms for annotation refinement and scene recognition. A comparison of inference-based scene classification with an ordinary approach.", "keywords": "image annotation;knowledge representation;inference algorithms;fuzzy petri net;multi-label image classification", "title": "Two-tier image annotation model based on a multi-label classifier and fuzzy-knowledge representation scheme"} {"abstract": "Grading systems based on competition ranking usually limit the grade distribution. We propose a methodology based on a betting system to relax the ranking restrictions. Betting assesses the skill to critically analyze source code. A case study in a video game development course validates our proposal.", "keywords": "assessment;gamification;competition;software development;code review", "title": "Betting system for formative code review in educational competitions"} {"abstract": "An error correcting code is said to be locally testable if there is a test that checks whether a given string is a codeword, or rather far from the code, by reading only a constant number of symbols of the string. While the best known construction of locally testable codes (LTCs) by Ben-Sasson and Sudan [SIAM J. Comput., 38 (2008), pp. 551-607] and Dinur [J. ACM, 54 (2007), article 12] achieves very efficient parameters, it relies heavily on algebraic tools and on probabilistically checkable proof (PCP) machinery. In this work we present a new and arguably simpler construction of LTCs that is purely combinatorial, does not rely on PCP machinery, and matches the parameters of the best known construction. However, unlike the latter construction, our construction is not entirely explicit.", "keywords": "locally testable codes ;probabilistically checkable proofs ;pcps of proximity ", "title": "COMBINATORIAL CONSTRUCTION OF LOCALLY TESTABLE CODES"} {"abstract": "Traditional slope stability analysis involves predicting the location of the critical slip surface for a given slope and computing a safety factor at that location. However, for some slopes with complicated stratigraphy several distinct critical slip surfaces can exist. Furthermore, the global minimum safety factor in some cases can be less important than potential failure zones when rehabilitating or reinforcing a slope. Existing search techniques used in slope stability analysis cannot find all areas of concern, but instead converge exclusively on the critical slip surface. This paper therefore proposes the use of a holistic multi modal optimisation technique which is able to locate and converge to multiple failure modes simultaneously. The search technique has been demonstrated on a number of benchmark examples using both deterministic and probabilistic analysis to find all possible failure mechanisms, and their respective factors of safety and reliability indices. 
The results from both the deterministic and probabilistic models show that the search technique is effective in locating the known critical slip surface while also establishing the locations of any other distinct critical slip surfaces within the slope. The approach is of particular relevance for investigating the stability of large slopes with complicated stratigraphy, as these slopes are likely to contain multiple failure mechanisms.", "keywords": "multi-modal failure;probabilistic analysis;deterministic analysis;slope stability;multi-modal optimisation", "title": "Deterministic and probabilistic multi-modal analysis of slope stability"} {"abstract": "The epithelium in inflamed intestinal segments of patients with Crohn's disease is characterized by a reduction of tight junction strands, strand breaks, and alterations of tight junction protein content and composition. In ulcerative colitis, epithelial leaks appear early due to micro-erosions resulting from upregulated epithelial apoptosis, in addition to a prominent increase of claudin-2. Th1-cytokine effects by interferon-γ in combination with TNFα are important for epithelial damage in Crohn's disease, while interleukin-13 (IL-13) is the key effector cytokine in ulcerative colitis stimulating apoptosis and upregulation of claudin-2 expression. Focal lesions caused by apoptotic epithelial cells contribute to barrier disturbance in IBD by their own conductivity and by confluence toward apoptotic foci or erosions. Another type of intestinal barrier defect can arise from α-hemolysin harboring E. coli strains among the physiological flora, which can gain pathologic relevance in combination with proinflammatory cytokines under inflammatory conditions. On the other hand, intestinal barrier impairment can also result from transcellular antigen translocation via an initial endocytotic uptake into early endosomes, and this is intensified by proinflammatory cytokines such as interferon-γ and may thus play a relevant role in the onset of IBD. Taken together, barrier defects contribute to diarrhea by a leak flux mechanism (e.g., in IBD) and can cause mucosal inflammation by luminal antigen uptake. Immune regulation of epithelial functions by cytokines may cause barrier dysfunction not only by tight junction impairments but also by apoptotic leaks, transcytotic mechanisms, and mucosal gross lesions.", "keywords": "apoptosis;barrier function;claudins;crohn's disease;inflammatory bowel disease;interleukin-13;tight junction;tumor necrosis factor-alpha;ulcerative colitis", "title": "Epithelial Tight Junctions in Intestinal Inflammation"} {"abstract": "The paradigm of perceptual fusion provides robust solutions to computer vision problems. By combining the outputs of multiple vision modules, the assumptions and constraints of each module are factored out to result in a more robust system overall. The integration of different modules can be regarded as a form of data fusion. To this end, we propose a framework for fusing different information sources through estimation of covariance from observations. The framework is demonstrated in a face and 3D pose tracking system that fuses similarity-to-prototypes measures and skin colour to track head pose and face position. The use of data fusion through covariance introduces constraints that allow the tracker to robustly estimate head pose and track face position simultaneously. 
", "keywords": "data fusion;pose estimation;similarity representation;face recognition", "title": "Fusion of perceptual cues for robust tracking of head pose and position"} {"abstract": "Models containing recurrent connections amongst the cells within a population can account for a range of empirical data on orientation selectivity in striate cortex. However, existing recurrent models are unable to veridically encode more than one orientation at a time. Underlying this inability is an inherent limitation in the variety of activity profiles that can be stably maintained. We propose a new recurrent model that can form a broader range of stable population activity patterns. We demonstrate that these patterns preserve information about multiple orientations present in the population inputs. This preservation has significant computational consequences when information encoded in several populations must be integrated to perform behavioral tasks, such as visual discrimination.", "keywords": "population codes;orientation selectivity;recurrent network models", "title": "Encoding multiple orientations in a recurrent network"} {"abstract": "This paper describes an automated modular fixture design system developed using a CAD-based methodology and implemented on a 3-D CAD/CAM software package. The developed automated fixture design (AFD) system automates the fixturing points determination and is integrated on top of the previously developed interactive and semi-automated fixture design systems. The determination of fixturing points is implemented in compliance with the fixturing principles that are formulated as heuristics rules to generate candidate list of points and then select the exact points from the list. Apart from determining the fixturing points automatically, the system is capable of producing cutting tool collision-free fixture design using its machining interference detection sub-module. The machining interference detection is accomplished through the use of cutter swept solid based on cutter swept volume approach. Therefore, using the developed AFD, an interference-free fixture design and assembly can be achieved in the possible shortest design lead-time.", "keywords": "modular fixture;automated fixture design;machining interference;cutter swept volume approach", "title": "An automated design and assembly of interference-free modular fixture setup"} {"abstract": "Impagliazzo and Wigderson (1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP not equal BPP). Unlike results in the nonuniform setting, their result does not provide a continuous trade-off between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. In this paper: We obtain an optimal worst-case to average-case connection for EXP: if EXP not subset of BPTIME(t(n)), then EXP has problems that cannot be solved on a fraction 1/2 + 1/t'(n) of the inputs by BPTIME(t'(n)) algorithms, for t' = t(Omega(1)). We exhibit a PSPACE-complete self-correctible and downward self-reducible problem. This slightly simplifies and strengthens the proof of Impagliazzo and Wigderson, which used a #P-complete problem with these properties. 
We argue that the results of Impagliazzo and Wigderson, and the ones in this paper, cannot be proved via "black-box" uniform reductions.", "keywords": "pseudorandomness;average-case complexity;derandomization;instance checkers", "title": "Pseudorandomness and average-case complexity via uniform reductions"} {"abstract": "Galectins are animal lectins that bind to β-galactosides, such as lactose and N-acetyllactosamine, in free form or contained in glycoproteins or glycolipids. They are located intracellularly or extracellularly. In the latter case, they exhibit bivalent or multivalent interactions with glycans on cell surfaces and induce various cellular responses, including production of cytokines and other inflammatory mediators, cell adhesion, migration, and apoptosis. Furthermore, they can form lattices with membrane glycoprotein receptors and modulate receptor properties. Intracellular galectins can participate in signaling pathways and alter biological responses, including apoptosis, cell differentiation, and cell motility. Current evidence indicates that galectins play important roles in acute and chronic inflammatory responses, as well as other diverse pathological processes. Galectin involvement in some processes in vivo has been discovered, or confirmed, through studies of genetically engineered mouse strains, each deficient in a given galectin. Current evidence also suggests that galectins may be therapeutic targets or employed as therapeutic agents for these inflammatory responses.", "keywords": "galectins;inflammation;allergic inflammation;autoimmune disease;atherosclerosis", "title": "Galectins in acute and chronic inflammation"} {"abstract": "Careful design and verification of the power distribution network of a chip are of critical importance to ensure its reliable performance. With the increasing number of transistors on a chip, the size of the power network has grown so large as to make the verification task very challenging. The available computational power and memory resources impose limitations on the size of networks that can be analyzed using currently known techniques. Many of today's designs have power networks that are too large to be analyzed in the traditional way as flat networks. In this paper, we propose a hierarchical analysis technique to overcome the aforesaid capacity limitation. We present a new technique for analyzing a power grid using macromodels that are created for a set of partitions of the grid. Efficient numerical techniques for the computation and sparsification of the port admittance matrices of the macromodels are presented. A novel sparsification technique using a 0-1 integer linear programming formulation is proposed to achieve superior sparsification for a specified error. The run-time and memory efficiency of the proposed method are illustrated on industrial designs. It is shown that even for a 60 million-node power grid, our approach allows for an efficient analysis, whereas previous approaches have been unable to handle power grids of such size.", "keywords": "circuit simulation;ir drop;matrix sparsification;partitioning;power distribution networks;power grid;signal integrity", "title": "Hierarchical analysis of power distribution networks"} {"abstract": "Ultra Wide Bandwidth (UWB) spread-spectrum techniques will play a key role in short-range wireless connectivity, supporting high bit-rate availability and low power consumption. 
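A macromodel of the kind described in the power distribution network abstract above reduces, for a single partition, to a Schur complement of the conductance matrix over the internal nodes. A minimal sketch follows (DC/resistive case only, illustrative function name); the paper's 0-1 integer-programming sparsification of the resulting matrix is not shown.

```python
# Macromodel of a power-grid partition: eliminate internal nodes by taking
# the Schur complement of the nodal conductance matrix, leaving a port
# admittance matrix that summarizes the partition at its boundary.
import numpy as np

def port_admittance(G, ports):
    """G: symmetric nodal conductance matrix; ports: indices of port nodes."""
    n = G.shape[0]
    port_set = set(ports)
    internal = [i for i in range(n) if i not in port_set]
    Gpp = G[np.ix_(ports, ports)]
    Gpi = G[np.ix_(ports, internal)]
    Gii = G[np.ix_(internal, internal)]
    # Schur complement: behavior seen at the ports with internals eliminated
    return Gpp - Gpi @ np.linalg.solve(Gii, Gpi.T)

# 3-node chain: port-0 -- 1 S -- internal-1 -- 1 S -- port-2
G = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
print(port_admittance(G, ports=[0, 2]))     # -> [[0.5, -0.5], [-0.5, 0.5]]
```

The example reproduces the expected series conductance (two 1 S resistors in series look like 0.5 S between the ports), which is the sanity check that the reduction preserves the partition's terminal behavior.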
UWB can be used in the design of wireless local and personal area networks providing advanced integrated multimedia services to nomadic users within hot-spot areas. Thus the assessment of the possible interference caused by UWB devices on already existing narrowband and wideband systems is fundamental to ensure nonconflicting coexistence and, therefore, to guarantee acceptance of UWB technology worldwide. In this paper, we study the coexistence issues between an indoor UWB-based system (hot-spot) and outdoor point to point (PP) links and Fixed Wireless Access (FWA) systems operating in the 3.5 - 5.0 GHz frequency range. We consider a realistic UWB master/slave system architecture and we show, through computer simulation, that in all practical cases the UWB system can coexist with PP and FWA systems without causing any harmful interference.", "keywords": "4g communication systems;spread spectrum;ultra wide band", "title": "On the interference of ultra wide band systems on point to point links and fixed wireless access systems"} {"abstract": "This paper describes the design and analysis of the scheduling algorithm for energy conserving medium access control (EC-MAC), which is a low-power medium access control (MAC) protocol for wireless and mobile ATM networks. We evaluate the scheduling algorithms that have been proposed for traditional ATM networks. Based on the structure of EC-MAC and the characteristics of the wireless channel, we propose a new algorithm that can deal with the burst errors and the location-dependent errors. Most scheduling algorithms proposed for either wired or wireless networks were analyzed with homogeneous traffic or multimedia services with simplified traffic models. We analyze our scheduling algorithm with more realistic multimedia traffic models based on H.263 video traces and self-similar data traffic. One of the key goals of the scheduling algorithms is simplicity and fast implementation. Unlike time-stamp-based algorithms, our algorithm does not need to sort virtual times; thus, the complexity of the algorithm is reduced significantly.", "keywords": "low-power operation;multiple access methods;queuing and scheduling algorithms;wireless and mobile atm;wireless multimedia communications", "title": "Scheduling Multimedia Services in a Low-Power MAC for Wireless and Mobile ATM Networks"} {"abstract": "A well-known transformation by Pearn, Assad and Golden reduces a capacitated arc routing problem (CARP) into an equivalent capacitated vehicle routing problem (CVRP). However, that transformation is regarded as impractical, since an original instance with r required edges is turned into a CVRP over a complete graph with 3r+1 vertices. We propose a similar transformation that reduces this graph to 2r+1 vertices, with the additional restriction that a previously known set of r pairwise disconnected edges must belong to every solution. Using a recent branch-and-cut-and-price algorithm for the CVRP, we observed that it yields an effective way of attacking the CARP, being significantly better than the exact methods created specifically for that problem. Computational experiments yielded improved lower bounds for almost all open instances from the literature. Several such instances could be solved to optimality. Scope and purpose: The scope of this paper is transforming arc routing problems into node routing problems. 
The paper shows that this approach can be effective and, in particular, that the original instances may generate node routing instances that behave as if the size had not increased. This result is obtained by slightly modifying the well-known transformation by Pearn, Assad and Golden from the capacitated arc routing problem (CARP) to the capacitated vehicle routing problem (CVRP), which is regarded as impractical. The paper provides computational experience using a recent branch-and-cut-and-price algorithm for the CVRP. The results are significantly better than the exact methods created specifically for that problem, improving lower bounds for almost all open instances from the literature. Several such instances could be solved to optimality.", "keywords": "arc routing;mixed-integer programming", "title": "Solving capacitated arc routing problems using a transformation to the CVRP"} {"abstract": "The car sequencing problem involves scheduling cars along an assembly line while satisfying capacity constraints. In this paper, we describe an Ant Colony Optimization (ACO) algorithm for solving this problem, and we introduce two different pheromone structures for this algorithm: the first pheromone structure aims at learning good sequences of cars, whereas the second pheromone structure aims at learning critical cars. We experimentally compare these two pheromone structures, which have complementary performances, and show that their combination allows ants to solve most instances very quickly.", "keywords": "ant colony optimization;car sequencing problem;multiple pheromone structures", "title": "Combining two pheromone structures for solving the car sequencing problem with Ant Colony Optimization"} {"abstract": "The purpose of this study is to investigate how information users view the concept of relevance and make their judgement(s) on relevant information through the framework of social representations theory. More specifically, this study attempts to address the questions of what users view as the constituent concepts of relevance, what are the core and peripheral concepts of relevance, and how these concepts are structured, by applying the structural analysis approach of social representations theory. We employ a free word association method for data collection. Two hundred and forty-four information users of public and academic libraries responded to questionnaires on their relevance judgement criteria. Collected data were content analysed and assessed using weighted frequency, similarity measure, and core/periphery measurements to identify key elements of relevance and to differentiate core and periphery elements of relevance. Results show that four out of the 26 elements (concepts) that emerged are core elements of the concept of relevance and 22 are peripheral. The findings of this study provide a quantitative measure for weighing various elements of relevance and the internal structure of the concept of relevance from users' perspectives, providing enhancements for search algorithms with quantitative metadata support.", "keywords": "relevance;relevance criteria;social representations;structural analysis;core-periphery analysis", "title": "Calibrating information users' views on relevance: A social representations approach"} {"abstract": "A communication tree is a binomial tree embedded in a hypercube, whose communication direction is from its leaves to its root.
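A minimal sketch (my own toy, with hypothetical pheromone tables) of how the two pheromone structures from the car sequencing abstract above could be combined when an ant picks the next car class:

```python
# tau_seq rewards good successions of car classes; tau_crit marks classes
# learnt as critical. Both influence the ant's choice multiplicatively.
import random

def choose_next(last_class, candidates, tau_seq, tau_crit,
                alpha=1.0, beta=1.0):
    weights = [(tau_seq[last_class][c] ** alpha) * (tau_crit[c] ** beta)
               for c in candidates]
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for c, w in zip(candidates, weights):   # roulette-wheel selection
        acc += w
        if r <= acc:
            return c
    return candidates[-1]
```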
If a problem to be solved is first divided into independent subproblems, then each subproblem can be solved by one of the hypercube processors, and all the subresults can be merged into the final results through tree communication. This paper uses two random search techniques, the genetic algorithm (GA) and simulated annealing (SA), to construct fault-tolerant communication trees with the minimum data transmission time. Experimental evaluation shows that, with reasonably low search time, the proposed GA and SA approaches are able to find more desirable communication trees (i.e., trees with less data transmission time) than the minimal cost approach can. A distributed approach which applies parallel search to communication subtrees in disjoint subcubes is also provided to reduce the search time of the proposed approaches.", "keywords": "fault-tolerant communication trees;hypercubes;genetic algorithms;simulated annealing;data transmission time;search time;maximal fault-free subcubes", "title": "Constructing fault-tolerant communication trees in hypercubes"} {"abstract": "Technology can improve the quality of life for elderly persons by supporting and facilitating the unique leadership roles that elderly people play in groups, communities, and other organizations. Elderly people are often organizational firekeepers. They maintain community memory, pass on organizational practices, and ensure social continuity. This paper reports studies of several essential community roles played by elderly community members, including the role of volunteer community webmaster, and describes two positive design projects that investigated how technology can support new kinds of social endeavors and contributions to society by elderly citizens. Finally, the paper speculates on the utility of intergenerational teams in strengthening society's workforce.", "keywords": "aging;elderly;positive design;non-profit community-based groups;intergenerational teams", "title": "The firekeepers: aging considered as a resource"} {"abstract": "This paper proposes a novel algorithm for temporal decomposition (TD) of speech, called limited error based event localizing temporal decomposition (LEBEL-TD), and its application to variable-rate speech coding. In previous work with TD, TD analysis was usually performed on each speech segment of about 200-300 ms or more, making it impractical for online applications. In this present work, the event localization is determined based on a limited error criterion and a local optimization strategy, which results in an average algorithmic delay of 65 ms. Simulation results show that an average log spectral distortion of about 1.5 dB can be achieved at an event rate of 20 events/s. Also, LEBEL-TD uses neither the computationally costly singular value decomposition routine nor the event refinement process, thus reducing significantly the computational cost of TD. Further, a method for variable-rate speech coding at an average rate of around 1.8 kbps based on STRAIGHT (Speech Transformation and Representation using Adaptive Interpolation of weiGHTed spectrum), which is a high-quality speech analysis-synthesis framework, using LEBEL-TD is also realized.
Subjective test results indicate that the performance of the proposed speech coding method is comparable to that of the 4.8 kbps FS-1016 CELP coder.", "keywords": "temporal decomposition;event vector;event function;straight;speech coding;line spectral frequency", "title": "Limited error based event localizing temporal decomposition and its application to variable-rate speech coding"} {"abstract": "This study presents results of a survey about social network website (SNW) usage that was administered to university students in China, Egypt, France, Israel, India, Korea, Macao, Sweden, Thailand, Turkey, and the United States. The offline and online social ties of SNW users were examined by nationality, levels of individualism-collectivism (I-C), gender, SNW usage, age, and access location. Contrary to existing literature, we found no differences in the number of offline friends between individualist and collectivist nations. Similarly, there was not a difference in the number of online social ties between individualist and collectivist nations. However, members of collectivist nations had significantly more online social ties never met in person. Heavy SNW users in individualist nations maintained significantly higher numbers of offline social ties; however, heavy SNW users in collectivist nations did not have higher numbers of offline social ties. Related implications and recommendations are provided.", "keywords": "social ties;online social ties;individualism;collectivism;social networking websites", "title": "ONLINE AND OFFLINE SOCIAL TIES OF SOCIAL NETWORK WEBSITE USERS: AN EXPLORATORY STUDY IN ELEVEN SOCIETIES"} {"abstract": "When sensor networks are deployed in unattended and hostile environments, secret keys must be established between sensors to secure their communication. Many key establishment schemes have been proposed for large scale sensor networks. In these schemes, each sensor shares a secret key with its neighbors via preinstalled keys. But it may occur that two end nodes which do not share a key with each other could use a secure path to share a secret key between them. However, during the transmission of the secret key, the secret key will be revealed to each node along the secure path. Several researchers have proposed multi-path key establishment to prevent a few compromised sensors from knowing the secret key, but it is vulnerable to stop-forwarding or Byzantine attacks. To counter these attacks, we propose a hop-by-hop authentication scheme for path key establishment to prevent Byzantine attacks. Compared to conventional protocols, our proposed scheme can mitigate the impact of malicious nodes performing a Byzantine attack, and sensor nodes can identify the malicious nodes. In addition, our scheme can save energy since it can detect and filter false data within two hops. ", "keywords": "byzantine attacks;path key establishment;security;wireless sensor networks", "title": "Pair-wise path key establishment in wireless sensor networks"} {"abstract": "The evidence suggests that human actions are supported by emotional elements that complement logic inference in our decision-making processes. In this paper an exploratory study is presented providing initial evidence of the positive effects of emotional information on the ability of intelligent agents to create better models of user actions inside smart-homes.
Preliminary results suggest that an agent incorporating valence-based emotional data into its input array can model user behaviour in a more accurate way than agents using no emotion-based data or raw data based on physiological changes.", "keywords": "emotion detection;ambient intelligence;artificial neural networks;fuzzy controllers", "title": "Affect-aware behaviour modelling and control inside an intelligent environment"} {"abstract": "A new combination rule based on Dezert-Smarandache theory (DSmT) is proposed to deal with the conflicting evidence resulting from the non-exhaustivity of the discernment frame. A two-dimensional measure factor in Dempster-Shafer theory (DST) is extended to DSmT to judge the conflict degree between evidence. The original DSmT combination rule or the new DSmT combination rule can be selected for fusion according to this degree. Finally, some examples in simultaneous fault diagnosis of motor rotors are given to illustrate the effectiveness of the proposed combination rule.", "keywords": "dsmt rule of combination;open frame of discernment;evidence conflict;simultaneous faults diagnosis;generalized basic probability assignment", "title": "A new DSmT combination rule in open frame of discernment and its application"} {"abstract": "The interactive experiences of players in networked games can be enhanced with the provision of an Immersive Voice Communication Service. Game players are immersed in their voice communication experience as they exchange live voice streams which are rendered in real-time with directional and distance cues corresponding to the users' positions in the virtual game world. In particular, we propose a Mobile Immersive Communication Environment (MICE) which targets mobile game players using platforms such as the Sony PSP and Nintendo DS. A computation reduction scheme was proposed in our previous work for the scalable delivery of MICE from a central server. On the basis of that computation reduction scheme, this paper identifies what factors, and to what extent, affect the unacceptable voice rendering error incurred when providing MICE. In the first experimental scenario, we investigate the level of unacceptable voice rendering error incurred in MICE for different avatar densities or avatar population sizes, with a fixed processing limit. In the second experimental scenario, we studied the level of unacceptable voice rendering error incurred in MICE for different processing resource limits, with a fixed avatar population size or avatar density. Our findings provide important insights into the planning and dimensioning of processing resources for the support of MICE, with due consideration of the impact on the unacceptable voice rendering error incurred.", "keywords": "computation cost reduction;immersive voice communications;voice over ip ;mobile gaming", "title": "trading off computation for error in providing immersive voice communications for mobile gaming"} {"abstract": "Evacuation planning is of critical importance for civil authorities to prepare for natural disasters, but efficient evacuation planning in a large city is computationally challenging due to the large number of evacuees and the huge size of transportation networks. One recently proposed algorithm, the Capacity Constrained Route Planner (CCRP), can give sub-optimal solutions with good accuracy in less time and with less memory than previous approaches. However, it still cannot scale to large networks.
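To give a feel for the capacity-constrained routing that the CCRP line of work above builds on, here is a toy sketch of my own (structure, names, and defaults are assumptions, not the CCRP code): route one group along its earliest-arrival path, waiting where an edge is full, then reserve capacity along that path.

```python
import heapq

def earliest_arrival_path(adj, cap, src, dst, t0=0):
    """adj: {u: [(v, travel_time)]}; cap: {(u, v): {time: free_units}},
    missing entries default to 1 free unit at that time step."""
    dist, prev, pq = {src: t0}, {}, [(t0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            break
        if t > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            depart = t
            while cap.get((u, v), {}).get(depart, 1) <= 0:
                depart += 1                      # wait for spare capacity
            if depart + w < dist.get(v, float("inf")):
                dist[v] = depart + w
                prev[v] = (u, depart)
                heapq.heappush(pq, (depart + w, v))
    # walk back, reserving one unit of capacity on every edge used
    node, path = dst, []
    while node != src:
        u, depart = prev[node]
        slot = cap.setdefault((u, node), {})
        slot[depart] = slot.get(depart, 1) - 1
        path.append((u, node, depart))
        node = u
    return dist[dst], list(reversed(path))
```

CCRP iterates this kind of step over all groups; the CCRP++ idea above is to avoid recomputing the global search from scratch each iteration.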
In this paper, we analyze the overhead of CCRP and derive a new heuristic, CCRP++, that is scalable to large networks. Our algorithm can reuse search results from previous iterations and avoid the repetitive global shortest path expansion of CCRP. We conducted extensive experiments with real-world road networks and different evacuation parameter settings. The results show that it gives a great speed-up without losing optimality.", "keywords": "evacuation planning;ccrp;shortest path", "title": "a scalable heuristic for evacuation planning in large road network"} {"abstract": "A battery energy storage system to support the frequency in autonomous microgrids. Original frequency controller to better damp the frequency oscillations. The frequency controller covers the main two control levels, namely primary and secondary. Enhanced control functions to ensure uninterruptible power supply to local sensitive loads. Simulations and experimental results validate the proposed control solution.", "keywords": "battery energy storage system;microgrid;frequency control;single-phase inverter", "title": "Battery energy storage system for frequency support in microgrids and with enhanced control features for uninterruptible supply of local loads"} {"abstract": "This paper presents a new extended average magnitude difference function for noise robust pitch detection. Average magnitude difference function based algorithms are suitable for real-time operation, but suffer from incorrect pitch detection in noisy conditions. The proposed new extended average magnitude difference function involves a sufficient number of averaging terms for all lag values compared to the original average magnitude difference function, and thereby eliminates the falling tendency of the average magnitude difference function without emphasizing pitch harmonics at higher lags, which is a severe limitation of other existing improvements of the average magnitude difference function. A noise robust post-processing that explores the contribution of each frequency channel is also presented. Experimental results on the Keele pitch database at different noise levels, with both white and colored noise, show the superiority of the proposed extended average magnitude difference function based pitch detection method over other methods based on the average magnitude difference function.", "keywords": "pitch detection;amdf;eamdf;noise robust", "title": "Extended Average Magnitude Difference Function Based Pitch Detection"} {"abstract": "The challenge for the microarchitect has always been (with very few notable domain-specific exceptions) how to translate the continually increasing processing power provided by Moore's Law into increased performance, or more recently into similar performance at lower cost in energy. The mechanisms in the past (almost entirely) kept the interface intact and used the increase in transistor count to improve the performance of the microarchitecture of the uniprocessor. When that became too hard, we went to larger and larger on-chip caches. Both are consistent with the notion that \"abstractions are good.\" At some point, we got overwhelmed with too many transistors; predictably, multi-core was born. As the transistor count continues to skyrocket, we are faced with two questions: what should be on the chip, and how should the software interface to it.
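For readers unfamiliar with the AMDF named in the pitch-detection abstract above, here is a small sketch of the classic function (my simplified version; the paper's extension changes how the averaging is done) run on a synthetic tone:

```python
# Plain AMDF: for each candidate lag, average |x[n] - x[n+lag]| over the
# frame; the pitch period shows up as the deepest valley.
import numpy as np

def amdf(frame, min_lag, max_lag):
    n = len(frame)
    lags = np.arange(min_lag, max_lag)
    # fewer terms are averaged at higher lags, which causes the falling
    # tendency that the paper's extended AMDF is designed to remove
    vals = np.array([np.mean(np.abs(frame[:n - k] - frame[k:])) for k in lags])
    return lags, vals

fs = 8000.0
t = np.arange(512) / fs
frame = np.sin(2 * np.pi * 200.0 * t)            # 200 Hz test tone
lags, vals = amdf(frame, 20, 200)
print("estimated pitch: %.1f Hz" % (fs / lags[np.argmin(vals)]))
```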
If we expect to continue to take advantage of what process technology is providing, I think we need to do several things, starting with rethinking the notion of abstraction and providing multiple interfaces for the programmer.", "keywords": "multicore;software interface;design;performance", "title": "multi-core demands multi-interfaces"} {"abstract": "The concept of transfer of learning holds that previous practice or experience in one task or domain will enable successful performance in another related task or domain. In contrast, specificity of learning holds that previous practice or experience in one task or domain does not transfer to other related tasks or domains. The aim of the current study is to examine whether decision-making skill transfers between sports that share similar elements, or whether it is specific to a sport. Participants (n=205) completed a video-based temporal occlusion decision-making test in which they were required to decide which action to execute across a series of 4 versus 4 soccer game situations. A sport engagement questionnaire was used to identify 106 soccer players, 43 other invasion sport players and 58 other sport players. Positive transfer of decision-making skill occurred between soccer and other invasion sports, which are related and have similar elements, but not from volleyball, supporting the concept of transfer of learning.", "keywords": "cognitive processes;knowledge;skill acquisition;perceptual-cognitive skill", "title": "Decisions, decisions, decisions: transfer and specificity of decision-making skill between sports"} {"abstract": "Cloud computing is a paradigm that focuses on sharing data and computing resources over a scalable network of nodes, so it is becoming a preferred environment for those applications with large scalability, dynamic collaboration and elastic resource requirements. Creative computing is an emerging research field for these applications, which can be considered as the study of computer science and related technologies and how they are applied to support creativity, take part in creative processes, and solve creativity-related problems. However, it is very hard work to develop such applications from scratch in a new environment, while discarding legacy systems in the existing environment is a big waste. Here software evolution plays an important role. In this paper, we first introduce creative computing, including its definition, properties and requirements. Then the advantages of the cloud computing platform for supporting creative computing are analysed. Next, a private cloud is built as the experimental environment. Finally, the process of creative application evolution is illustrated. Our work concerns the research and application of software evolution methodology, and is also an exploratory attempt at creative computing research in a cloud environment.", "keywords": "creative application;creative computing;software evolution;cloud computing", "title": "an approach of creative application evolution on cloud computing platform"} {"abstract": "Performance optimization is a critical step in the design of integrated circuits. Rapid advances in very large scale integration (VLSI) technology have enabled shrinking feature sizes, wire widths, and wire spacings, making the effects of coupling capacitance more apparent. As signals switch faster, noise due to coupling between neighboring wires becomes more pronounced.
Changing the relative signal arrival times (RSATs) alters the victim line delay due to the varying coupling noise on the victim line. The authors propose a sensitivity-based method to analyze delay uncertainties of coupled interconnects due to uncertain signal arrival times at their inputs. Simulation results show that the proposed method strikes a good balance between model accuracy and complexity compared to existing approaches for analyzing delay uncertainties of coupled interconnects.", "keywords": "coupled interconnects;delay changes;sensitivity;signal arrival times;statistical timing", "title": "A sensitivity-based approach to analyzing signal delay uncertainty of coupled interconnects"} {"abstract": "Problem formulation: given a number of possible pointed targets, compute the target that the user points to. Estimate head pose by visually tracking the off-plane rotations of the face. Recognize two different hand pointing gestures (point left and point right). Model the problem using the Dempster-Shafer theory of evidence. Use Dempster's rule of combination to fuse information and derive the pointed target.", "keywords": "human-robot interaction;computer vision;gesture recognition;pointing gestures;head pose estimation", "title": "Visual estimation of pointed targets for robot guidance via fusion of face pose and hand orientation"} {"abstract": "In this paper we investigate the sensitivity of RTN noise spectra to statistical variability alone and in combination with variability in the traps' properties, such as trap level and trap activation energy. By means of 3D statistical simulation, we demonstrate the latter to be mostly responsible for noise density spectra dispersion, due to its large impact on the RTN characteristic time. As a result, FinFET devices are shown to be slightly more sensitive to RTN than FDSOI devices. In comparison, bulk MOSFETs are strongly disadvantaged by the statistical variability associated with high channel doping.", "keywords": "random telegraph noise;simulation;finfet;statistical variability;reliability", "title": "RTN distribution comparison for bulk, FDSOI and FinFETs devices"} {"abstract": "Many multimedia content-based retrieval systems allow query formulation with user setting of the relative importance of features (e.g., color, texture, shape, etc.) to mimic the user's perception of similarity. However, the systems do not modify their similarity matching functions, which are defined during system development. In this paper, we present a neural network-based learning algorithm for adapting the similarity matching function toward the user's query preference based on his/her relevance feedback. The relevance feedback is given as ranking errors (misranks) between the retrieved and desired lists of multimedia objects. The algorithm is demonstrated for facial image retrieval using the NIST Mugshot Identification Database with encouraging results.", "keywords": "content-based retrieval;image retrieval;multimedia databases;learning;ranking;similarity matching;relevance feedback", "title": "Learning similarity matching in multimedia content-based retrieval"} {"abstract": "In this paper, an attempt has been made to incorporate some special features into the conventional particle swarm optimization (PSO) technique for decentralized swarm agents. The modified particle swarm algorithm (MPSA) for the self-organization of decentralized swarm agents is proposed and studied.
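A back-of-envelope sketch (function and numbers are mine, not the paper's model) of the first-order idea behind the coupled-interconnect abstract above: propagate RSAT uncertainty through delay sensitivities to bound the victim delay.

```python
# First-order sensitivity propagation: the sensitivities d(delay)/d(RSAT_i)
# would come from circuit simulation; here they are illustrative values.
def delay_bounds(nominal_delay, sensitivities, rsat_uncertainties):
    """sensitivities[i]: d(delay)/d(RSAT_i); rsat_uncertainties[i]: +/- range."""
    spread = sum(abs(s) * u for s, u in zip(sensitivities, rsat_uncertainties))
    return nominal_delay - spread, nominal_delay + spread

# victim delay 120 ps; two aggressors with sensitivities 0.3 and -0.2 ps/ps,
# each aggressor arrival time uncertain by +/- 25 ps (made-up numbers)
print(delay_bounds(120.0, [0.3, -0.2], [25.0, 25.0]))  # -> (107.5, 132.5)
```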
In the MPSA, the update rule of the best agent in the swarm is based on a proportional control concept, and the objective value of each agent is evaluated on-line. In this scheme, each agent self-organizes to flock to the best agent in the swarm and migrate to a moving target while avoiding collision between the agent and the nearest obstacle/agent. To analyze the dynamics of the MPSA, stability analysis is carried out on the basis of eigenvalue analysis for the time-varying discrete system. Moreover, a guideline about how to tune the MPSA's parameters is proposed. The simulation results have shown that the proposed scheme effectively constructs a self-organized swarm system with the capability of flocking and migration.", "keywords": "decentralized swarm systems;particle swarm optimization;self-organization", "title": "Self-organization of decentralized swarm agents based on modified particle swarm algorithm"} {"abstract": "This paper examines the Euler-Lagrange equations for the solution of the large deformation diffeomorphic metric mapping problem studied in Dupuis et al. (1998) and Trouve (1995), in which two images $I_0$, $I_1$ are given and connected via the diffeomorphic change of coordinates $I_0 \circ \phi^{-1} = I_1$, where $\phi = \phi_1$ is the end point at $t = 1$ of the curve $\phi_t$, $t \in [0, 1]$, satisfying $\dot{\phi}_t = v_t(\phi_t)$, $t \in [0, 1]$, with $\phi_0 = \mathrm{id}$. The variational problem takes the form $\hat{v} = \operatorname{argmin}_v \left( \int_0^1 \|v_t\|_V^2 \, dt + \|I_0 \circ \phi_1^{-1} - I_1\|_{L^2}^2 \right)$, where $\|v_t\|_V$ is an appropriate Sobolev norm on the velocity field $v_t(\cdot)$, and the second term enforces matching of the images, with $\|\cdot\|_{L^2}$ representing the squared-error norm. In this paper we derive the Euler-Lagrange equations characterizing the minimizing vector fields $v_t$, $t \in [0, 1]$, assuming sufficient smoothness of the norm to guarantee existence of solutions in the space of diffeomorphisms. We describe the implementation of the Euler equations using a semi-Lagrangian method of computing particle flows and show the solutions for various examples. As well, we compute the metric distance on several anatomical configurations as measured by $\int_0^1 \|v_t\|_V \, dt$ on the geodesic shortest paths.", "keywords": "computational anatomy;euler-lagrange equation;variational optimization;deformable template;metrics", "title": "Computing large deformation metric mappings via geodesic flows of diffeomorphisms"} {"abstract": "The introduction of Graphical Processing Units (GPUs) and computing using GPUs in recent years has opened possibilities for the simple parallelization of programs. In this update, we present the modernized version of the program ARVO [J. Busa, J. Dzurina, E. Hayryan, S. Hayryan, C-K. Hu, J. Plavka, I. Pokorny, J. Skivanek, M.-C. Wu, Comput. Phys. Comm. 165 (2005) 59]. The whole package has been rewritten in the C language and parallelized using OpenCL. Some new tricks have been added to the algorithm in order to save memory, much needed for efficient usage of graphics cards. A new tool called 'input_structure' was added for the conversion of pdb files into files suitable for work with the C and OpenCL version of ARVO. New version program summary Program title: ARVO-CL Catalog identifier: ADUL_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUL_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. 
of lines in distributed program, including test data, etc.: 11834 No. of bytes in distributed program, including test data, etc.: 182528 Distribution format: tar.gz Programming language: C. OpenCL. Computer: PC Pentium; SPP'2000. Operating system: All OpenCL capable systems. Has the code been vectorized or parallelized?: Parallelized using GPUs. A serial version (non GPU) is also included in the package. Classification: 3. External routines: cl.hpp (http://www.khronos.org/registry/cl/api/1.1/cl.hpp) Catalog identifier of previous version: ADUL_v1_0 Journal reference of previous version: Comput. Phys. Comm. 165(2005)59 Does the new version supersede the previous version?: Yes", "keywords": "arvo;proteins;solvent accessible area;excluded volume;stereographic projection;opencl package", "title": "ARVO-CL: The OpenCL version of the ARVO package - An efficient tool for computing the accessible surface area and the excluded volume of proteins via analytical equations"} {"abstract": "This paper presents a 3-D near-field imaging algorithm that is formulated for 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm is able to completely compensate the curvature of the wavefront in the near-field through a specifically defined interpolation process and provides extremely high computational efficiency by the application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated both with numerical simulations and measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration techniques.", "keywords": "multiple-input multiple-output;near-field imaging;range migration;sparse array", "title": "Three-Dimensional Near-Field MIMO Array Imaging Using Range Migration Techniques"} {"abstract": "A novel imprinting process has been developed for resist pattern transfer on flexible transparent plastic substrates. The polymer resist was first spin-coated on the mold, which was treated with a release agent. After softbaking, the resist layer was attached to a plastic substrate coated with an adhesive. The patterns were completely transferred to the substrate after removing the mold. Using this method, we were able to obtain the desired patterns on the plastic substrate without heating the substrate, which could deform the substrate.", "keywords": "imprint lithography;flexible plastic substrate;pmma", "title": "Imprint lithography for flexible transparent plastic substrates"} {"abstract": "Data generation increases at highly dynamic rates, making its storage, processing, and update costs at one central location excessive. The P2P paradigm emerges as a powerful model for organizing and searching large data repositories distributed over independent sources. Advanced query operators, such as skyline queries, are necessary in order to help users handle the huge amount of available data. A skyline query retrieves the set of nondominated data points in a multidimensional data set. Skyline query processing in P2P networks poses inherent challenges and demands nontraditional techniques, due to the distribution of content and the lack of global knowledge.
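The dominance test at the heart of the skyline queries just introduced is compact enough to show directly; the sketch below is my own naive, centralized version (the distributed SKYPEER machinery is the paper's contribution, not shown here).

```python
# p dominates q if p is at least as good in every dimension and strictly
# better in at least one (here, lower is better in all dimensions).
def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# e.g. hotels as (price, distance): the skyline keeps every trade-off point
print(skyline([(50, 8), (60, 2), (40, 9), (70, 1), (65, 3), (45, 10)]))
# -> [(50, 8), (60, 2), (40, 9), (70, 1)]
```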
Relying on a superpeer architecture, we propose a threshold-based algorithm, called SKYPEER, and its variants, for the efficient computation of skyline points in arbitrary subspaces, while reducing both computational time and the volume of transmitted data. Furthermore, we address the problem of routing skyline queries over the superpeer network and we propose an efficient routing mechanism, namely SKYPEER(+), which further improves the performance by reducing the number of contacted superpeers. Finally, we provide an extensive experimental evaluation showing that our approach performs efficiently and provides a viable solution when a large degree of distribution is required.", "keywords": "skyline queries;peer-to-peer systems;routing indexes", "title": "Efficient Routing of Subspace Skyline Queries over Highly Distributed Data"} {"abstract": "Along with improvements in computer performance and in the availability of memory, not only has the resolution of meteorological models of atmospheric currents been refined, but the accuracy of the necessary physical approximations has also been steadily improved. Now fully elastic models are being developed which also describe sound waves, although sound processes are not considered relevant for atmospheric flow phenomena. But the full set of elastic Navier-Stokes equations has a quite simple structure in comparison to sound-proof systems such as \"anelastically\" approximated models, so that the corresponding numerical models can be implemented on parallel computer systems without too much effort. This was taken into account in the redesign of the \"Karlsruhe Atmospheric Mesoscale Model\" (KAMM) for parallel processing. The new fully elastic version of this model is written in FORTRAN-90. The necessary communication operations are gathered into a few functions of a communication library, which is designed for different computer architectures: for massively parallel systems, for parallel vector computers requiring long vectors, but also for mono-processors. ", "keywords": "navier-stokes equation;elastic model;regional climatology;communication library;massive parallel systems;vector computers;benchmark", "title": "Parallel processing in regional climatology: The parallel version of the \"Karlsruhe Atmospheric Mesoscale Model\" (KAMM)"} {"abstract": "Mild variations in organic matrices, which are investigated in this work, are revealed by alterations in X-ray Raman scattering. The multivariate approaches, principal component analysis (PCA) and hierarchical cluster analysis (HCA), are applied to visualize these effects. Conventional energy-dispersive X-ray fluorescence equipment is used, where organic compounds produce intense scattering of the X-ray source. X-ray Raman processes, previously obtained only for solid samples using synchrotron radiation, are indirectly visualized here through PCA scores and HCA cluster analysis, since they alter the Compton and Rayleigh scattering. As a result, their influences can be seen in known sample characteristics, such as those associated with gender and melanin in dog hairs, and the differentiation of coconut varieties. Chemometrics has shown that, despite their complexity, natural samples can be easily classified. 
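The PCA scores used in chemometric studies like the one above reduce to a few lines of linear algebra; here is a compact sketch of my own (the spectra are random stand-ins, not the paper's data).

```python
# Mean-center the samples-by-channels matrix, take the SVD, and project the
# samples onto the leading principal components; clusters in score space
# are what separate sample classes in such studies.
import numpy as np

def pca_scores(X, n_components=2):
    """X: samples x channels matrix (e.g. one EDXRF spectrum per row)."""
    Xc = X - X.mean(axis=0)               # mean-center each channel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T       # scores on the leading PCs

rng = np.random.default_rng(0)
spectra = rng.normal(size=(6, 100))       # stand-in for measured spectra
spectra[3:] += 2.0                        # a second "class" of samples
print(np.round(pca_scores(spectra), 2))
```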
", "keywords": "principal component analysis ;hierarchical cluster analysis ;natural sample differentiation;x-ray raman scatter spectrometry ;complex organic mixtures", "title": "X-ray scattering processes and chemometrics for differentiating complex samples using conventional EDXRF equipment"} {"abstract": "We present a deterministic algorithm to compute the Reeb graph of a PL real-valued function on a simplicial complex in time, where is the size of the 2-skeleton. The problem can be solved using dynamic graph connectivity. We obtain the running time by using offline graph connectivity which assumes that the deletion time of every arc inserted is known at the time of insertion. The algorithm is implemented and experimental results are given. In addition, we reduce the offline graph connectivity problem to computing the Reeb graph.", "keywords": "algorithms;reeb graph;pl topology;graph connectivity", "title": "A Deterministic Time Algorithm for the Reeb Graph"} {"abstract": "Due to rapid increases in printed circuit board (PCB) complexity and lack of research progresses in PCB routing algorithms over the years, routing has become a bottleneck in overall circuit board design time. Today, a high-end PCB typically takes significant tedious manual efforts to complete the wiring and this problem will only get worse for future generations of PCBs. In this talk, we present some of our recent research results on this problem.", "keywords": "routing;pcb;algorithms", "title": "advances in pcb routing"} {"abstract": "Does a firm get any extra value from investing resources in sponsoring its own virtual community above and beyond the value that could be created for the firm, indirectly, via customer-initiated communities? If so, what explains the extra value derived from a firm-sponsored virtual community and how might this understanding inform managers about appropriate strategies for leveraging virtual communities as part of a value-creating strategy for the firm? We test two models of virtual community to help shed light on the answers to these questions. We hypothesize that in customer-initiated virtual communities, three attributes of member-generated information (MGI) drive value, while in firm-sponsored virtual communities, a sponsoring firm's efforts, as well as MGI, drive value. Drawing on information search and processing theories, and developing new measures of three attributes of MGI (consensus, consistency, and distinctiveness), we surveyed 465 consumers across numerous communities. We find that value can emerge via both models, but that in a firm-sponsored model, a sponsor's efforts are more powerful than MGI and have a positive, direct effect on the trust-building process. Our results suggest a continuum of value creation whereby firms extract greater value as they migrate toward the firm-sponsored model.", "keywords": "attribution theory;co-creation;online communities;online trust;user-generated content;virtual communities", "title": "A Test of Two Models of Value Creation in Virtual Communities"} {"abstract": "We present a method for automatically selecting optimal implementations of sparse matrix-vector operations. Our software \"AcCELS\" (Accelerated Compress-storage Elements for Linear Solvers) involves a setup phase that probes machine characteristics, and a run-time phase where stored characteristics are combined with a measure of the actual sparse matrix to find the optimal kernel implementation. 
We present a performance model that is shown to be accurate over a large range of matrices.", "keywords": "optimization;sparse;matrix-vector product;blocking;self-adaptivity", "title": "Performance optimization and modeling of blocked sparse kernels"} {"abstract": "Multi-level nonlinear mixed effects (ML-NLME) models have received a great deal of attention in recent years because of the flexibility they offer in handling the repeated-measures data arising from various disciplines. In this study, we propose both maximum likelihood and restricted maximum likelihood estimations of ML-NLME models with two-level random effects, using first order conditional expansion (FOCE) and the expectation-maximization (EM) algorithm. The FOCE-EM algorithm was compared with the most popular Lindstrom and Bates (LB) method in terms of computational and statistical properties. Basal area growth series data measured from Chinese fir (Cunninghamia lanceolata) experimental stands and simulated data were used for evaluation. The FOCE-EM and LB algorithms gave the same parameter estimates and fit statistics for models that converged under both. However, FOCE-EM converged for all the models, while LB did not, especially for the models in which two-level random effects are simultaneously considered in several base parameters to account for between-group variation. We recommend the use of FOCE-EM in ML-NLME models, particularly when convergence is a concern in model selection.", "keywords": "cunninghamia lanceolata;expectation-maximization algorithm;first order conditional expansion;lindstrom and bates algorithm;simulated data;two-level nonlinear mixed effects models", "title": "Parameter estimation of two-level nonlinear mixed effects models using first order conditional linearization and the EM algorithm"} {"abstract": "The paper presents an Arlequin based multi-scale method for studying problems related to the mechanical behaviour of sandwich composite structures. Towards this end, different models are mixed and glued to each other. Several coupling operators are tested in order to assess the usefulness of the proposed approach. A new coupling operator is proposed and tested on the different glued Arlequin zones. A free-clamped sandwich beam with a soft core undergoing a concentrated effort on the free edge is used as a typical example (benchmark) in the validation procedure. Numerical simulations were conducted as a preliminary evaluation of the various coupling operators, and the discrepancies between local and global models in the gluing zone have been addressed with sufficient care.", "keywords": "arlequin;multi-scale;sandwich;local effects;finite element", "title": "Multi-scale modelling of sandwich structures using the Arlequin method Part I: Linear modelling"} {"abstract": "The large number of organizations developing executive information systems (EISs) highlights the importance of understanding why executives use these systems. This survey investigated how ease of use, the number of features, and support staff characteristics are related to EIS acceptance. Acceptance was measured by the percentage of the targeted users who incorporate the EIS into their daily routine. High usage was not associated with ease of use, a large number of features, or the staff being physically close to the users. However, rapid development time was positively correlated with acceptance. Higher numbers of available features were associated with larger support staffs and larger user groups.
The number of users was positively correlated with both staff size and EIS age. Existing EISs place a stronger emphasis on reporting internal rather than external data.", "keywords": "executive information systems;information systems features;information systems support", "title": "DETERMINATES OF EIS ACCEPTANCE"} {"abstract": "This paper presents a non-blind watermarking technique that is robust to non-linear geometric distortion attacks. This is one of the most challenging problems for copyright protection of digital content because it is difficult to estimate the distortion parameters for the embedded blocks. In our proposed scheme, the locations of the blocks are recorded by the translation parameters from multiple Scale Invariant Feature Transform (SIFT) feature points. This method is based on two assumptions: SIFT features are robust to non-linear geometric distortion, and even such non-linear distortion can be regarded as \"linear\" distortion in local regions. We conducted experiments using 149,800 images (7 standard images and 100 images downloaded from Flickr, 10 different messages, 10 different embedding block patterns, and 14 attacks). The results show that the watermark detection performance is drastically improved, while the baseline method can achieve only chance-level accuracy.", "keywords": "watermarking;scale invariant feature transform;non-linear geometric distortion attacks", "title": "SIFT-Based Non-blind Watermarking Robust to Non-linear Geometrical Distortions"} {"abstract": "We present the concept of ZOOM NAVIGATION, a new interaction paradigm to cope with visualization and navigation problems as found in large information and application spaces. It is based on the pluggable zoom, an object-oriented component derived from the variable zoom fisheye algorithm. Working with a limited screen space we apply a Degree-of-interest (DOI) function to guide the level of detail used in presenting information. Furthermore we determine the user's information and navigation needs by analysing the interaction history. This leads to the definition of the aspect-of-interest (AOI) function. The AOI is evaluated in order to choose one of the several information aspects, under which an item can be studied. This allows us to change navigational affordance and thereby enhance navigation. In this paper we describe the ideas behind the pluggable zoom and the definition of DOI and AOI functions. The application of these functions is demonstrated within two case studies, the ZOOM ILLUSTRATOR and the ZOOM NAVIGATOR. We discuss our experience with these implemented systems.", "keywords": "zooming interfaces;zoom navigation;screen layout;information navigation;fisheye display;human-computer interfaces;detail + context technique", "title": "zoom navigation exploring large information and application spaces"} {"abstract": "A wireless ad hoc network does not have an infrastructure, and thus needs the cooperation of nodes in forwarding other nodes' packets. Reputation systems are an effective approach to give nodes incentives to cooperate in packet forwarding. However, existing reputation systems either lack rigorous analysis, or have analysis in unrealistic models. In this paper, we propose FITS, the first reputation system that has rigorous analysis and guaranteed incentive compatibility in a practical model. FITS has two schemes: the first scheme is very simple, but needs a Perceived Probability Assumption (PPA); the second scheme uses more sophisticated techniques to remove the need for PPA.
We show that both of these FITS schemes have a subgame perfect Nash equilibrium in which the packet forwarding probability of every node is one. Experimental results verify that FITS provides strong incentives for nodes to cooperate.", "keywords": "ad hoc networks;incentive compatibility;routing;packet forwarding", "title": "FITS: A Finite-Time Reputation System for Cooperation in Wireless Ad Hoc Networks"} {"abstract": "Concurrent and retrospective verbal protocol methods were used to collect thoughts from 18 participants during a manual handling task involving the repeated transfer of loads between locations at two tables. The effectiveness of qualitative and quantitative methods of analysing the reported information was tested in the study. A simple taxonomy was developed to investigate the content of the reports (including reports on postures and loads) and determine how the participants approached the task (whether they made plans, described actions or evaluated their completion of the task). References to posture were obtained in the verbal protocol reports, indicating that the participants had some awareness of their postures during parts of the task. There were similarities in the content of the concurrent and retrospective reports, but there were differences in the amount of detail between the methods and differences in the way the reports were constructed. There could be some scope for developing the quantitative analysis of the frequencies of references to classes of information, though this can only be recommended for concurrent reports on tasks of short duration. The analyses of qualitative data gave a deeper insight into the reports, such as identifying factors that can be important when planning to handle a load, or illustrating how participants can change their focus of attention periodically throughout the task. The relative strengths of the concurrent and retrospective methods are described, along with ideas for improving the quality of information collected in future studies. A number of potential problems with the interpretation of the reported information are explained.", "keywords": "self reports;verbal protocol methods;manual handling tasks", "title": "Developing a verbal protocol method for collecting and analysing reports of workers' thoughts during manual handling tasks"} {"abstract": "Tree parsing as supported by code generator generators like BEG, burg, iburg, lburg and ml-burg is a popular instruction selection method. There are two existing approaches for implementing tree parsing: dynamic programming, and tree-parsing automata; each approach has its advantages and disadvantages. We propose a new implementation approach that combines the advantages of both existing approaches: we start out with dynamic programming at compile time, but at every step we generate a state for a tree-parsing automaton, which is used the next time a tree matching the state is found, turning the instruction selector into a fast tree-parsing automaton. We have implemented this approach in the Gforth code generator. The implementation required little effort and reduced the startup time of Gforth by up to a factor of 2.5.", "keywords": "algorithms;performance;instruction selection;tree parsing;dynamic programming;automaton;lazy", "title": "Fast and flexible instruction selection with on-demand tree-parsing automata"} {"abstract": "The emerging context of e-Science imposes new scenarios and requirements for digital preservation.
In particular, the data must be reliably stored, for which redundancy is a key strategy. But managing redundancy must take into account the potential failure of components. Considering that correlated failures can affect multiple components and potentially cause a complete loss of data, we propose an innovative solution for managing redundancy strategies in heterogeneous environments such as data grids. This solution comprises a simulator that can be used to evaluate redundancy strategies according to preservation requirements and supports the process of designing the best architecture to be deployed, which can later be used as an observer of the deployed system, supporting its monitoring and management.", "keywords": "e-science;simulation;digital libraries;data grid;digital preservation", "title": "challenges on preserving scientific data with data grids"} {"abstract": "The order in which software components are tested can have a significant impact on the number of stubs required during component integration testing. This paper presents an efficient approach that applies heuristics based on a given software component test dependency graph to automatically generate a test order that requires a (near) minimal number of test stubs. Thus, the approach reduces testing effort and cost. The paper describes the proposed approach, analyses its complexity and illustrates its use. Comparison with three well-known graph-based approaches, for a real-world software application, shows that only the classic Le Traon et al.'s approach and ours give an optimal number of stubs. However, experiments on randomly simulated dependency models with 100 to 10,000 components show that our approach has a significant performance advantage, with a reduction in the average running time of 96.01%.", "keywords": "heuristic algorithms;software testing;component integration;directed feedback vertex-set problem", "title": "automated test order generation for software component integration testing"} {"abstract": "The Schelling model of 1971 is a complicated version of a square-lattice Ising model at zero temperature, used to explain urban segregation on the basis of the neighbor preferences of the residents, without external reasons. Various versions between the Ising and Schelling models give about the same results. Inhomogeneous \"temperatures\" T do not change the results much, while a feedback between segregation and T leads to a self-organization of an average T.", "keywords": "urban segregation;feedback;phase transition;randomness", "title": "Inhomogeneous and self-organized temperature in Schelling-Ising model"} {"abstract": "The question of the \"manner in which an existing software architecture affects requirements decision-making\" is considered important in the research community; however, to our knowledge, this issue has not been scientifically explored. We do not know, for example, the characteristics of such architectural effects. This paper describes an exploratory study on this question. Specific types of architectural effects on requirements decisions are identified, as are different aspects of the architecture together with the extent of their effects. This paper gives quantitative measures and qualitative interpretation of the findings. The understanding gained from this study has several implications in the areas of: project planning and risk management, requirements engineering (RE) and software architecture (SA) technology, architecture evolution, tighter integration of RE and SA processes, and middleware in architectures.
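To illustrate the kind of question the data-grid redundancy simulator above is meant to answer, here is a toy Monte Carlo of my own (the placement policy and all probabilities are made up): estimate the chance that every replica of an object is lost when node failures and correlated whole-site failures combine.

```python
import random

def loss_probability(n_sites, replicas, p_node, p_site, trials=100_000):
    lost = 0
    for _ in range(trials):
        # place each replica on a distinct site (one node per site here)
        sites = random.sample(range(n_sites), replicas)
        site_down = [random.random() < p_site for _ in range(n_sites)]
        # a replica is lost if its node fails or its whole site fails
        if all(site_down[s] or random.random() < p_node for s in sites):
            lost += 1
    return lost / trials

# 3 replicas over 10 sites; nodes fail with prob 0.05, sites with 0.01
print(loss_probability(10, 3, 0.05, 0.01))   # analytically about 2.1e-4
```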
Furthermore, we describe several new hypotheses that have emerged from this study, which provide grounds for future empirical work. This study involved six RE teams (of university students), whose task was to elicit new requirements for upgrading a pre-existing banking software infrastructure. The data collected was based on a new meta-model for requirements decisions, which is a by-product of this study. ", "keywords": "software architecture;requirements engineering;empirical study;software quality;process improvement;quantitative and qualitative research;architecture and requirements technology", "title": "An exploratory study of architectural effects on requirements decisions"} {"abstract": "The spatial optical beam-forming network (OBFN) is a structure superior to traditional ones in bandwidth adaptability, system complexity, and so forth. Compared with a conventional beam-forming network, the output signal model of the OBFN is different, making previous direction of arrival (DOA) estimation methods unsuitable for this structure. At present, DOA estimation for this structure has not been sufficiently explored, and there is no efficient algorithm. In this paper, the observation model of the network is established first, and then a new DOA estimation method is proposed. The new method makes use of the amplitude distribution of the fiber array to achieve direction finding. Sufficient numerical simulations are carried out to demonstrate the feasibility and efficiency of the proposed algorithm.", "keywords": "spatial optical beam-forming network;doa estimation;fiber array;fourier transform", "title": "A direction of arrival estimation method for spatial optical beam-forming network"} {"abstract": "We consider the problem of approximation of an operator by information described by n real characteristics in the case when this information is fuzzy. We develop the well-known idea of an optimal error method of approximation for this case. It is a method whose error is the infimum of the errors of all methods for a given problem, characterized in this case by fuzzy numbers. We generalize the concept of central algorithms, which are always optimal error algorithms and in the crisp case are useful both in practice and in theory. In order to do this we define the centre of an L-fuzzy subset of a normed space. The introduced concepts allow us to describe optimal methods of approximation for linear problems using balanced fuzzy information. ", "keywords": "l-fuzzy number;fuzzy information;central algorithm of approximation", "title": "On central algorithms of approximation under fuzzy information"} {"abstract": "A web tool that provides currents and/or sea surface elevation in the Gulf of California is presented. The above variables are reconstructed from harmonic constants obtained from harmonic analyses of time series produced by a 3D baroclinic numerical model of the Gulf. The numerical model was forced (1) at the Gulf's mouth by the tides and the hydrographic variability of the Pacific Ocean (at semiannual and annual frequencies), and (2) at the Gulf's surface by winds, heat and fresh water fluxes (also at the semiannual and annual frequencies).
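The harmonic-constant reconstruction behind the Gulf of California web tool above boils down to summing cosine constituents; here is a minimal sketch (the amplitudes and phases are made-up station values; only the M2/K1 angular speeds are standard).

```python
# eta(t) = sum_i A_i * cos(omega_i * t - phase_i), one term per constituent.
import numpy as np

def elevation(t_hours, constituents):
    """constituents: list of (amplitude_m, speed_deg_per_hour, phase_deg)."""
    eta = np.zeros_like(t_hours)
    for amp, speed, phase in constituents:
        eta += amp * np.cos(np.radians(speed * t_hours - phase))
    return eta

consts = [(0.45, 28.984, 120.0),   # M2: principal lunar semidiurnal
          (0.20, 15.041, 45.0)]    # K1: lunisolar diurnal
t = np.arange(0.0, 48.0, 1.0)      # 48 hours, hourly samples
print(np.round(elevation(t, consts), 3))
```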
The response to these forcings results in motions with time scales limited to semidiurnal and diurnal, fortnightly and monthly (due to nonlinear interactions of the tidal components), and semiannual and annual frequencies (due to the nontidal forcing).", "keywords": "prediction;sea level;currents;gulf of california;tides;hamsom", "title": "Prediction of currents and sea surface elevation in the Gulf of California from tidal to seasonal scales"} {"abstract": "Inspired by the Caputo type of fractional derivative, we bring forth the concept of a memory-dependent derivative, which is simply defined as an integral form of a common derivative with a kernel function on a slipping interval. As the time delay tends to zero, it reduces to the common derivative. Higher-order derivatives are defined in accordance with the first-order one. Comparatively, the form of the kernel function for the fractional type is fixed, yet that of the memory-dependent type can be chosen freely according to the necessity of applications. So this kind of definition is better than the fractional one for reflecting the memory effect (the instantaneous change rate depends on the past state). Its definition is more intuitive for understanding the physical meaning, and the corresponding memory-dependent differential equation has more expressive force.", "keywords": "memory-dependent derivative;memory-dependent differential equation;fractional differential equation;caputo derivative;time delay", "title": "Surpassing the fractional derivative: Concept of the memory-dependent derivative"} {"abstract": "In this paper we provide a systematic way to construct the robust counterpart of a nonlinear uncertain inequality that is concave in the uncertain parameters. We use convex analysis (support functions, conjugate functions, Fenchel duality) and conic duality in order to convert the robust counterpart into an explicit and computationally tractable set of constraints. It turns out that to do so one has to calculate the support function of the uncertainty set and the concave conjugate of the nonlinear constraint function. Conveniently, these two computations are completely independent. This approach has several advantages. First, it provides an easy structured way to construct the robust counterpart both for linear and nonlinear inequalities. Second, it shows that for new classes of uncertainty regions and for new classes of nonlinear optimization problems tractable counterparts can be derived. We also study some cases where the inequality is nonconcave in the uncertain parameters.", "keywords": "fenchel duality;robust counterpart;nonlinear inequality;robust optimization;support functions", "title": "Deriving robust counterparts of nonlinear uncertain inequalities"} {"abstract": "In this correspondence, the low-weight terms of the weight distribution of the block code obtained by terminating a convolutional code after x information blocks are expressed as a function of x.
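A plausible formalization of the memory-dependent derivative described above (notation mine, written to match the abstract's wording rather than quoted from the paper):

```latex
% First-order memory-dependent derivative over a slipping interval of
% length \omega, with a freely chosen kernel K (assumed normalized):
\[
  D_{\omega} f(t) \;=\; \frac{1}{\omega} \int_{t-\omega}^{t} K(t-s)\, f'(s)\, ds .
\]
% With K \equiv 1, the mean-value theorem gives D_{\omega} f(t) \to f'(t)
% as the time delay \omega \to 0, matching the "common derivative" limit
% stated in the abstract.
```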
It is shown that this function is linear in x for codes with noncatastrophic encoders, but quadratic in x for codes with catastrophic encoders. These results are useful to explain the poor performance of convolutional codes with a catastrophic encoder at low-to-medium signal-to-noise ratios.", "keywords": "block codes;convolutional codes;soft-decision decoding;viterbi decoding;weight distribution", "title": "On the weight distribution of terminated convolutional codes"} {"abstract": "An increasing need for collaboration and resource sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled CDA+GrAF. We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and translating annotations between different NLP applications, and eventually plug-and-play of different modules in NLP applications.", "keywords": "natural language processing ;medical informatics ;data model;information model;hl7 clinical document architecture;iso graph annotation format", "title": "Common data model for natural language processing based on two existing standard information models: CDA+GrAF"} {"abstract": "The well-known uniform error property for signal constellations and codes is extended to encompass information bits. We introduce a class of binary labelings for signal constellations, called bit geometrically uniform (BGU) labelings, for which the uniform bit error property holds, i.e., the bit error probability does not depend on the transmitted signal. Strong connections between the symmetries of constellations and binary Hamming spaces are involved. For block-coded modulation (BCM) and trellis-coded modulation (TCM) Euclidean-space codes, BGU encoders are introduced and studied. The properties of BGU encoders prove quite useful for the analysis and design of codes aimed at minimizing the bit, rather than symbol, error probability. 
Applications to the analysis and the design of serially concatenated trellis codes are presented, together with a case study which realizes a spectral efficiency of 2 b/s/Hz.", "keywords": "coding;concatenated codes;constellations;encoding;labeling;trellis codes;uniform error property", "title": "Labelings and encoders with the uniform bit error property with applications to serially concatenated trellis codes"} {"abstract": "In this paper, we perform a stochastic analysis of the packet-pair technique, which is a widely used method for estimating the network bandwidth in an end-to-end manner. There has been no explicit delay model of the packet-pair technique primarily because the stochastic behavior of a packet pair has not been fully understood. Our analysis is based on a novel insight that the transient analysis of the G/D/1 system can accurately describe the behavior of a packet pair, providing an explicit stochastic model. We first investigate a single-hop case and derive an analytical relationship between the input and the output probing gaps of a packet pair. Using this single-hop model, we provide a multi-hop model under an assumption of a single tight link. Our model shows the following two important features of the packet-pair technique: (i) The difference between the proposed model and the previous fluid model becomes significant when the input probing gap is around the characteristic value. (ii) The available bandwidth of any link after the tight link is not observable. We verify our model via ns-2 simulations and empirical results. We give a discussion on recent packet-pair models in relation to the proposed model and show that most of them can be regarded as special cases of the proposed model.", "keywords": "packet-pair technique;bandwidth estimation;m/d/1 queue;transient analysis", "title": "Stochastic analysis of packet-pair probing for network bandwidth estimation"} {"abstract": "A teleoperation system for a hydro-static transmission (HST) drive crawler-type robotic vehicle is described in this paper. The system was developed to satisfy the needs of various farm operations and teleoperation in unknown agricultural fields. The controller has a layered architecture and supports two degrees of cooperation between the operator and the robot: direct and supervisory control. The vehicle can travel autonomously by using an RTK-GPS and a fiber-optic gyroscope during supervisory control, and the operator interface also provides a field navigator based on Google Map technology. The vehicle's position and heading direction could be updated at 1 Hz on precise satellite image maps. The results of field tests using direct control showed that it is difficult for the operator to control the movement of the vehicle along the target lines. On the other hand, the vehicle could travel in a straight line with a maximum lateral error of 0.3m by using supervisory control.", "keywords": "agriculture;computer vision;autonomous mobile robot;communication;vehicle", "title": "Development of a teleoperation system for agricultural vehicles"} {"abstract": "The audio channel conveys rich clues for content-based multimedia indexing. Interesting audio analysis includes, besides widely known speech recognition and speaker identification problems, speech/music segmentation, speaker gender detection, special effect recognition such as gun shots or car pursuit, and so on. All these problems can be considered as an audio classification problem which needs to generate a label from low-level audio signal analysis. 
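The generic pipeline implied here, low-level features feeding a neural network classifier, can be sketched as follows. The two features and the toy speech-like/music-like data are simplifying stand-ins, not the perceptually motivated features described in this abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def features(signal, sr=16000):
    """Two crude descriptors: zero-crossing rate and normalised spectral
    centroid (hypothetical choices for illustration only)."""
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    return [zcr, centroid / (sr / 2)]

# Toy data: noise frames labelled 0 ("speech-like"), tones labelled 1 ("music-like").
rng = np.random.default_rng(0)
t = np.arange(1024) / 16000
X = [features(rng.standard_normal(1024)) for _ in range(50)] + \
    [features(np.sin(2 * np.pi * rng.uniform(200, 2000) * t)) for _ in range(50)]
y = [0] * 50 + [1] * 50

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(clf.score(X, y))
```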
While most audio analysis techniques in the literature are problem-specific, we propose in this paper a general framework for audio classification. The proposed technique uses a perceptually motivated model of the human perception of audio classes in the sense that it makes a judicious use of certain psychophysical results and relies on a neural network for classification. In order to assess the effectiveness of the proposed approach, extensive experiments on several audio classification problems have been carried out, including speech/music discrimination in Radio/TV programs, gender recognition on a subset of the switchboard database, highlights detection in sports videos, and musical genre recognition. The classification accuracies of the proposed technique are comparable to those obtained by problem-specific techniques while offering the basis of a general approach for audio classification.", "keywords": "audio classification;gender identification;music genre recognition;highlights detection;perceptually motivated features;content-based audio indexing;piecewise gaussian modelling", "title": "A general audio classifier based on human perception motivated model"} {"abstract": "Quality of service (QoS) routing is known to be an NP-hard problem in case of two or more additive constraints, and several exact algorithms and heuristics have been proposed to address this issue. In this paper, we consider a particular two-constrained quality of service routing problem maximizing path stability with a limited path length, with the aim of improving routability in dynamic multi-hop mobile wireless ad hoc networks. First, we propose a novel exact algorithm to solve the optimal weight-constrained path problem. We instantiate our algorithm to find the most stable path not exceeding a certain number of hops, in polynomial time. This algorithm is then applied to the practical case of proactive routing in dynamic multi-hop wireless ad hoc networks. In these networks, an adequate compromise between route stability and its length in hops is essential for appropriately mitigating the impact of the network dynamics on the validity of established routes. Secondly, we set up a common framework for the comparison between three families of proactive routing: the shortest path-based routing, the most stable path-based routing and our proposed most stable constrained path routing. We then show through extensive simulations that routing based on our proposed algorithm selects appropriate stable paths yielding a very high routability with an average path length just above that of the shortest paths.", "keywords": "manets;constrained-based routing;quality of service routing;stability constraint;np-hard problems;polynomial algorithms", "title": "Stability routing with constrained path length for improved routability in dynamic MANETs"} {"abstract": "SAR complex image data compression based on wavelet-quadtree is proposed. QC-DWT achieves state-of-the-art performance. QC-DWT achieves higher performance than wavelet-zerotree coding.", "keywords": "quadtree;sar complex image data compression;zerotree", "title": "SAR complex image data compression based on quadtree and zerotree Coding in Discrete Wavelet Transform Domain: A Comparative Study"} {"abstract": "Tree-walking automata (TWAs) have recently received new attention in the fields of formal languages and databases. To achieve a better understanding of their expressiveness, we characterize them in terms of transitive closure logic formulas in normal form. 
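To make the machine model concrete, here is a minimal simulator for a deterministic TWA on binary trees together with a toy automaton. The transition encoding (state, node label, node type, leaf flag) is one possible choice for illustration, not a construction taken from this abstract.

```python
# Nodes are (label, left, right) tuples; None marks an absent child.

def node_type(path):
    return "root" if not path else ("left" if path[-1] == 0 else "right")

def run_twa(tree, delta, start="q0", limit=1000):
    path, state = [], start            # path of 0/1 turns from the root
    for _ in range(limit):
        node = tree
        for turn in path:              # locate the current node
            node = node[1] if turn == 0 else node[2]
        leaf = node[1] is None and node[2] is None
        state, move = delta[(state, node[0], node_type(path), leaf)]
        if move in ("accept", "reject"):
            return move == "accept"
        if move == "up":
            path.pop()
        else:
            path.append(0 if move == "left" else 1)
    raise RuntimeError("no verdict within step limit")

# Toy automaton: walk down the leftmost branch, accept iff that leaf is 'a'.
delta = {
    ("q0", "b", "root", False): ("q0", "left"),
    ("q0", "b", "left", False): ("q0", "left"),
    ("q0", "a", "left", True): ("q0", "accept"),
    ("q0", "b", "left", True): ("q0", "reject"),
}
tree = ("b", ("b", ("a", None, None), ("b", None, None)), ("a", None, None))
print(run_twa(tree, delta))            # True
```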
It is conjectured by Engelfriet and Hoogeboom that TWAs cannot define all regular tree languages, or equivalently, all of monadic second-order logic. We prove this conjecture for a restricted, but powerful, class of TWAs. In particular, we show that 1-bounded TWAs, that is, TWAs that are only allowed to traverse every edge of the input tree at most once in every direction, cannot define all regular languages. We then extend this result to a class of TWAs that can simulate first-order logic (FO) and is capable of expressing properties not definable in FO extended with regular path expressions; the latter logic being a valid abstraction of current query languages for XML and semistructured data.", "keywords": "tree-walking automata;regular tree languages;logic;formal languages.", "title": "On the power of tree-walking automata"} {"abstract": "We propose a type and effect system for authentication protocols built upon a tagging scheme that formalizes the intended semantics of ciphertexts. The main result is that the validation of each component in isolation is provably sound and fully compositional: if all the protocol participants are independently validated, then the protocol as a whole guarantees authentication in the presence of Dolev-Yao intruders. The highly compositional nature of the analysis makes it suitable for multi-protocol systems, where different protocols might be executed concurrently.", "keywords": "authentication;static analysis;process calculi", "title": "authenticity by tagging and typing"} {"abstract": "The reaction of three mesylates of furanoderivatives in pyridine is studied at the DFT level. All the structures were fully optimized in the gas phase, in chloroform, and in water. The calculations revealed the following increasing order of barrier heights: 1 > 2 > 3. MPW1K/6-31+G** level activation barriers are higher than those from B3LYP/6-31+G**. The furanoid ring conformations are close to E0 or 0E.", "keywords": "furanoid ring;conformation;dft calculations;nbo;quaternary ammonium salts", "title": "The conformational behavior, geometry and energy parameters of Menshutkin-like reaction of O-isopropylidene-protected glycofuranoid mesylates in view of DFT calculations"} {"abstract": "In migrating a legacy relational database system to the object-oriented (OO) platform, once database migration completes, application modules must be migrated as well, with embedded relational database operations mapped into their OO correspondents. In this paper we study mapping relational update operations to their OO equivalents, which include UPDATE, INSERT and DELETE operations. Relational update operation translation from relational to OO faces the thorny problem of transforming a value-based relationship model into a reference-based one while maintaining the relational integrity constraints. Moreover, with a relational database where inheritance is expressed as an attribute-value subset relationship, changing of some attribute values may lead to the change of the position of an object in the class inheritance hierarchy, which we call object migration. Considering all these aspects, algorithms are given mapping relational UPDATE, INSERT and DELETE operations to their OO correspondents. 
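A minimal sketch of the object-migration issue just described: an OO counterpart of a relational UPDATE changes attribute values and, when a value-based membership rule flips, moves the object within the inheritance hierarchy. The Employee/Manager rule below is a made-up example, not one from this abstract.

```python
class Employee:
    def __init__(self, emp_id, salary):
        self.emp_id, self.salary = emp_id, salary

class Manager(Employee):     # inheritance that, relationally, was expressed
    pass                     # as a subset relationship on attribute values

extent = {}                  # OO counterpart of the relation, keyed by PK

def oo_update(emp_id, **new_values):
    """OO equivalent of a relational UPDATE: set attributes and, if the
    class-membership predicate flips, migrate the object in the hierarchy."""
    obj = extent[emp_id]
    for attr, value in new_values.items():
        setattr(obj, attr, value)
    # Hypothetical membership rule: salary above 10000 means Manager.
    target = Manager if obj.salary > 10000 else Employee
    if type(obj) is not target:
        obj.__class__ = target   # object migration; references stay valid
    return obj

extent[1] = Employee(1, 8000)
oo_update(1, salary=12000)
print(type(extent[1]).__name__)   # Manager
```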
Our work emphasizes examining the differences in the representation of the source schema's semantics that result from the translation process, as well as differences in the inherent semantics of the two models.", "keywords": "relational model;object-oriented model;query translation;update translation", "title": "Translating update operations from relational to object-oriented databases"} {"abstract": "An attribute based encryption scheme (ABE) is a cryptographic primitive in which every user is identified by a set of attributes, and some function of these attributes is used to determine the ability to decrypt each ciphertext. Chase proposed the first multi-authority ABE scheme, which requires a fully trusted central authority who has the ability to decrypt each ciphertext in the system. This central authority would endanger the whole system if it were corrupted. This paper provides a threshold multi-authority fuzzy identity-based encryption (MA-FIBE) scheme without a central authority for the first time. An encrypter can encrypt a message such that a user can decrypt only if he holds at least d(k) of the given attributes about the message for at least t + 1 (t <= n/2) honest authorities out of all n attribute authorities in the proposed scheme. This paper considers a stronger adversary model in the sense that the corrupted authorities are allowed to distribute incorrect secret keys to the users. The security proof is based on the secrecy of the underlying distributed key generation protocol and the joint zero secret sharing protocol, and on the standard decisional bilinear Diffie-Hellman assumption. The proposed MA-FIBE can be extended to a threshold multi-authority attribute-based encryption (MA-ABE) scheme, and both key-policy-based and ciphertext-policy-based MA-ABE schemes without a central authority are presented in this paper. Moreover, several other extensions, such as a proactive large universe MA-ABE scheme, are also provided in this paper. ", "keywords": "threshold multi authority abe;without a central authority", "title": "Secure threshold multi authority attribute based encryption without a central authority"} {"abstract": "The Segmentation According to Natural Examples (SANE) algorithm learns to segment objects in static images from video training data. SANE uses background subtraction to find the segmentation of moving objects in videos. This provides object segmentation information for each video frame. The collection of frames and segmentations forms a training set that SANE uses to learn the image and shape properties of the observed motion boundaries. When presented with new static images, the trained model infers segmentations similar to the observed motion segmentations. SANE is a general method for learning environment-specific segmentation models. Because it can automatically generate training data from video, it can adapt to a new environment and new objects with relative ease, an advantage over untrained segmentation methods or those that require human-labeled training data. By using the local shape information in the training data, it outperforms a trained local boundary detector. Its performance is competitive with a trained top-down segmentation algorithm that uses global shape. 
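The automatic generation of training pairs described above can be sketched with a simple median-background subtraction; the threshold, the median background model, and the synthetic frames below are illustrative assumptions rather than the pipeline used by SANE.

```python
import numpy as np

def motion_labels(frames, threshold=25.0):
    """Derive per-frame foreground masks by background subtraction; each
    (frame, mask) pair becomes a training example for a static segmenter."""
    stack = np.stack(frames).astype(float)        # (T, H, W) grayscale
    background = np.median(stack, axis=0)
    return [(f, np.abs(f - background) > threshold) for f in stack]

rng = np.random.default_rng(1)
frames = [rng.uniform(0, 10, (32, 32)) for _ in range(8)]
frames[3][10:20, 10:20] += 100                    # a "moving object" in frame 3
pairs = motion_labels(frames)
print(int(pairs[3][1].sum()))                     # roughly 100 foreground pixels
```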
The shape information it learns from one class of objects can assist the segmentation of other classes.", "keywords": "segmentation;machine learning;motion;computer vision;markov random field", "title": "Segmentation According to Natural Examples: Learning Static Segmentation from Motion Segmentation"} {"abstract": "This article is concerned with new robust stability conditions and a robust stabilisation method for a discrete-time system with time-delay and time-varying structured uncertainties that enter the state and input matrices. An improved approach to obtain new robust stability conditions is proposed. Our approach employs a generalised Lyapunov functional combined with the parameterised model transformation method and the generalised free weighting matrix method. These generalisations lead to generalised robust stability conditions that are given in terms of linear matrix inequalities. Moreover, based on the new robust stability conditions, a robust stabilisation method for uncertain discrete-time systems with time-delay is given. Numerical examples compare our robust stability conditions with some existing conditions to show the effectiveness of our approach and also illustrate the improvement of our robust stabilisation method.", "keywords": "robust stability;time-delay systems;discrete-time systems;uncertain systems;delay-dependent conditions;linear matrix inequality", "title": "New delay-dependent conditions on robust stability and stabilisation for discrete-time systems with time-delay"} {"abstract": "We present a new method, link-test, to select prostate cancer biomarkers from SELDI mass spectrometry and microarray data sets. Biomarkers selected by link-test are supported by data sets from both the mRNA and protein levels, and therefore result in improved robustness. Link-test determines the level of significance of the association between a microarray marker and a specific mass spectrum marker by constructing background mass spectra distributions estimated by all human protein sequences in the SWISS-PROT database. The data set consists of both microarray and mass spectrometry data from prostate cancer patients and healthy controls. A list of statistically justified prostate cancer biomarkers is reported by link-test. Cross-validation results show high prediction accuracy using the identified biomarker panel. We also employ a text-mining approach with the OMIM database to validate the cancer biomarkers. The study with link-test represents one of the first cross-platform studies of cancer biomarkers.", "keywords": "microarray;mass spectrometry;biomarker;prostate cancer;text mining", "title": "Link test: A statistical method for finding prostate cancer biomarkers"} {"abstract": "This paper presents the design and evaluation of a new SRAM cell made of nine transistors (9T). The proposed 9T cell utilizes a scheme with separate read and write wordlines; it is shown that the 9T cell achieves improvements in power dissipation, performance and stability compared with previous designs (that require 10T and 8T) for low-power operation. The 9T scheme is amenable to small feature sizes as encountered in the deep sub-micron/nano ranges of CMOS technology.", "keywords": "static noise margin;sram cell;nanotechnology;leakage power;low power", "title": "a low leakage 9t sram cell for ultra-low power operation"} {"abstract": "The multimodal, multi-paradigm brain-computer interfacing (BCI) game Bacteria Hunt was used to evaluate two aspects of BCI interaction in a gaming context. 
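One of the measures this study relies on, alpha activity as a relaxation index, can be estimated along the following lines; the band edges and relative-power normalisation are conventional choices, not necessarily the study's exact pipeline.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs=256.0, band=(8.0, 12.0)):
    """Relative alpha-band power of one EEG channel via Welch's PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd.sum()

# Synthetic check: noise plus a 10 Hz rhythm yields a clear alpha fraction.
rng = np.random.default_rng(2)
t = np.arange(0, 10, 1 / 256.0)
eeg = rng.standard_normal(t.size) + 2.0 * np.sin(2 * np.pi * 10.0 * t)
print(round(alpha_power(eeg), 3))
```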
One goal was to examine the effect of feedback on the ability of the user to manipulate his mental state of relaxation. This was done by having one condition in which the subject played the game with real feedback, and another with sham feedback. The feedback did not seem to affect the game experience (such as sense of control and tension) or the objective indicators of relaxation: alpha activity and heart rate. The results are discussed with regard to clinical neurofeedback studies. The second goal was to look into possible interactions between the two BCI paradigms used in the game: steady-state visually-evoked potentials (SSVEP) as an indicator of concentration, and alpha activity as a measure of relaxation. SSVEP stimulation activates the cortex and can thus block the alpha rhythm. Despite this effect, subjects were able to keep their alpha power up, in compliance with the instructed relaxation task. In addition to the main goals, a new SSVEP detection algorithm was developed and evaluated.", "keywords": "brain-computer interfacing;multimodal interaction;steady-state visually-evoked potentials;concentration;neurofeedback relaxation;game", "title": "Bacteria Hunt: Evaluating multi-paradigm BCI interaction"} {"abstract": "In this paper, we develop a simplified mathematical model (a metamodel) of a simulation model of conflict, based on ideas drawn from the analysis of more general physical systems, such as those found in fluid dynamics modelling. We show that there is evidence from the analysis of historical conflicts to support the kind of emergent behaviour implied by this approach. We then apply this approach to the development of a metamodel of a particular complexity-based simulation model of conflict (ISAAC), developed for the US Marine Corps. The approach we have illustrated here is very generic, and is applicable to any simulation model which has complex interactions similar to those found in fluid dynamic modelling, or in simulating the emergent behaviour of large numbers of simple systems which interact with each other locally.", "keywords": "conflict;emergent behaviour;metamodel;simulation;complexity", "title": "Metamodels and emergent behaviour in models of conflict"} {"abstract": "Current peer-to-peer (P2P) systems often suffer from a large fraction of freeriders not contributing any resources to the network. Various mechanisms have been designed to overcome this problem. However, the selfish behavior of peers has aspects which go beyond resource sharing. This paper studies the effects on the topology of a P2P network if peers selfishly select the peers to connect to. In our model, a peer exploits locality properties in order to minimize the latency (or response times) of its lookup operations. At the same time, the peer aims at not having to maintain links to too many other peers in the system. By giving tight bounds on the price of anarchy, we show that the resulting topologies can be much worse than if peers collaborated. Moreover, the network may never stabilize, even in the absence of churn. Finally, we establish the complexity of Nash equilibria in our game theoretic model of P2P networks. 
Specifically, we prove that it is NP-hard to decide whether our game has a Nash equilibrium and can stabilize.", "keywords": "game theory;peer-to-peer;price of anarchy;np-hardness;metric spaces", "title": "Topological Implications of Selfish Neighbor Selection in Unstructured Peer-to-Peer Networks"} {"abstract": "Prolac is a new statically-typed, object-oriented language for network protocol implementation. It is designed for readability, extensibility, and real-world implementation; most previous protocol languages, in contrast, have been based on hard-to-implement theoretical models and have focused on verification. We present a working Prolac TCP implementation directly derived from 4.4BSD. Our implementation is modular---protocol processing is logically divided into minimally-interacting pieces; readable---Prolac encourages top-down structure and naming intermediate computations; and extensible---subclassing cleanly separates protocol extensions like delayed acknowledgements and slow start. The Prolac compiler uses simple global analysis to remove expensive language features like dynamic dispatch, resulting in end-to-end performance comparable to an unmodified Linux 2.0 TCP.", "keywords": "network protocol;structure;analysis;dynamic;language;performance;implementation;verification;readability;process;modular;compilation;extensibility;model;feature;global;object oriented language", "title": "a readable tcp in the prolac protocol language"} {"abstract": "While reading devices for the visually impaired have been available for many years, they are often expensive and difficult to use. The image processing required to enable the reading task is a composition of several important sub-tasks, such as image capture, image stabilization, image enhancement and page-curl dewarping, region segmentation, region grouping, and word recognition. In this paper we deal with some of these sub-tasks in an effort to prototype a device (Tyflos-reader) that will read a document for a person with a visual impairment and respond to voice commands for control. Initial experimental results on a set of textbook and newspaper pages are also presented.", "keywords": "assistive devices;image super-resolution;perspective rectification;page-curl dewarping;document segmentation;voice user-interface", "title": "A WEARABLE DOCUMENT READER FOR THE VISUALLY IMPAIRED: DEWARPING AND SEGMENTATION"} {"abstract": "Social contagion depicts a process of information (e.g., fads, opinions, news) diffusion in online social networks. A recent study reports that in a social contagion process, the probability of contagion is tightly controlled by the number of connected components in an individual's neighborhood. Such a number is termed the structural diversity of an individual, and it is shown to be a key predictor in the social contagion process. Based on this, a fundamental issue in a social network is to find top-k users with the highest structural diversities. In this paper, we, for the first time, study the top-k structural diversity search problem in a large network. Specifically, we study two types of structural diversity measures, namely, the component-based and the core-based structural diversity measures. For component-based structural diversity, we develop an effective upper bound of structural diversity for pruning the search space. The upper bound can be incrementally refined in the search process. Based on such upper bound, we propose an efficient framework for top-k structural diversity search. 
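The component-based measure itself is easy to state in code: count the connected components of the subgraph induced by a vertex's neighbourhood. The sketch below uses plain adjacency lists and omits the pruning bounds discussed in this abstract.

```python
def structural_diversity(adj, v):
    """Number of connected components in the subgraph induced by v's
    neighbourhood (component-based structural diversity)."""
    neighbours = set(adj[v])
    seen, components = set(), 0
    for u in neighbours:
        if u in seen:
            continue
        components += 1
        stack = [u]                    # DFS restricted to the neighbourhood
        while stack:
            w = stack.pop()
            if w in seen:
                continue
            seen.add(w)
            stack.extend(x for x in adj[w] if x in neighbours)
    return components

adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}
print(structural_diversity(adj, 0))    # 3 components: {1, 2}, {3}, {4}
```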
To further speed up the structural diversity evaluation in the search process, several carefully devised search strategies are proposed. We also design efficient techniques to handle frequent updates in dynamic networks and maintain the top-k results. We further show how the techniques proposed for the component-based structural diversity measure can be extended to handle the core-based structural diversity measure. Extensive experimental studies are conducted in real-world large networks and synthetic graphs, and the results demonstrate the efficiency and effectiveness of the proposed methods.", "keywords": "structural diversity;disjoint-set forest;top-k search;dynamic graph", "title": "Top-K structural diversity search in large networks"} {"abstract": "Recent years have witnessed increasing efforts toward architecture standardization for secure wireless mobile ad hoc networks. In this scenario, when a node actively utilizes other nodes' resources for communication but refuses to help other nodes in their transmission or reception of data, it is called a selfish node. As the entire mobile ad hoc network (MANET) depends on cooperation from neighboring nodes, it is very important to detect and eliminate selfish nodes from being part of the network. In this paper, the token-based umpiring technique (TBUT) is proposed, where every node needs a token to participate in the network and the neighboring nodes act as umpires. The proposed TBUT is found to be very efficient, with reduced detection time and low overhead. The security analysis and experimental results have shown that TBUT is feasible for enhancing the security and network performance of real applications.", "keywords": "manet;selfish node;performance and token-based umpiring technique", "title": "A unified approach for detecting and eliminating selfish nodes in MANETs using TBUT"} {"abstract": "Data warehouse (DW) technology was developed to support the integration of external data sources (EDSs) for the purpose of advanced data analysis by On-Line Analytical Processing (OLAP) applications. Since contents and structures of integrated EDSs may evolve in time, the content and schema of a DW must evolve too in order to correctly reflect the evolution of EDSs. To manage DW evolution, we developed the multiversion data warehouse (MVDW) approach. In this approach, different states of a DW are represented by a sequence of persistent DW versions that correspond either to the real world state or to a simulation scenario. Typically, OLAP applications execute star queries that join multiple fact and dimension tables. An important optimization technique for this kind of queries is based on join indexes. Since in the MVDW fact and dimension data are physically distributed among multiple DW versions, standard join indexes need extensions. In this paper we present the concept of a multiversion join index (MVJI) applicable to indexing dimension and fact tables in the MVDW. The MVJI has a two-level structure, where an upper level is used for indexing attributes and a lower level is used for indexing DW versions. The paper also presents a theoretical upper-bound (pessimistic) analysis of the MVJI performance characteristics with respect to I/O operations. The analysis is followed by experimental evaluation. It shows that the MVJI increases system performance for queries addressing multiple DW versions with exact match and range predicates. 
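The two-level structure just described lends itself to a compact sketch: an upper level keyed by attribute value over a lower level keyed by DW version. This is a plain in-memory stand-in for what would in practice be disk-resident index pages.

```python
from collections import defaultdict

class MultiversionJoinIndex:
    """Two-level index sketch: attribute value -> (DW version -> row ids),
    so a query touches only the versions it addresses."""
    def __init__(self):
        self.upper = defaultdict(lambda: defaultdict(set))

    def insert(self, value, version, rowid):
        self.upper[value][version].add(rowid)

    def lookup(self, value, versions):
        lower = self.upper.get(value, {})
        return {v: lower.get(v, set()) for v in versions}

idx = MultiversionJoinIndex()
idx.insert("electronics", "V1", 10)
idx.insert("electronics", "V2", 17)
print(idx.lookup("electronics", ["V1", "V2"]))   # {'V1': {10}, 'V2': {17}}
```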
", "keywords": "star query;join index;multiversion data warehouse;multiversion query;multiversion join index", "title": "Multiversion join index for multiversion data warehouse"} {"abstract": "In this paper we present a topologically based approach to the analysis and synthesis of reactive distillation columns. We extend the definition of Tapp et al. [Tapp, M., Holland, S., Glasser, D., & Hildebrandt, D. (2004). Column profile maps part A: Derivation and interpretation. Industrial and Engineering Chemistry Research, 43, 364-374] of a column section in non-reactive distillation column to a reactive column section (RCS) in a reactive distillation column. A RCS is defined as a section of a reactive distillation column in which there is no addition or removal of material or energy. We introduce the concept of a reactive column profile map (RCPM) in which the profiles in the RCPM correspond to the liquid composition profiles in the RCS. By looking at the singular points in the RCPM, it is demonstrated that for a single chemical reaction with no net change in the total number of moles, the bifurcation of the singular points depends on both the difference point as introduced by Hauan et al. [Hauan, S., Ciric, A. R., Westerberg, A. W., & Lien, K. M. (2000). Difference points in extractive and reactive cascades I-Basic properties and analysis. Chemical Engineering Science, 55, 3145-3159] as well as the direction of the stoichiometric vector. These two vectors combine to define what we call the reactive difference point composition. We show that there only certain feasible topologies of the RCPM and these depend only on the position of the reactive difference point composition. We look at a simple example where the vapour liquid equilibrium (VLE) is ideal and show that we can classify regions of reactive difference point compositions that result in similar topology of the RCPM. Thus, by understanding the feasible topologies of the RCPM, one is able to identify profiles in the RCPM that are desirable and hence one is able to synthesize a reactive distillation column by combining RCS that correspond to the desired profile in the RCPM. We believe that this tool will help understand how and when reaction could introduce unexpected behaviors and this can be used as a complementary tool to existing methods used for synthesis of reactive distillation columns. ", "keywords": "reactive column profile map;difference point;reactive column section", "title": "Reactive column profile map topology: Continuous distillation column with non-reversible kinetics"} {"abstract": "It is known that every hypercube Q(n) is a bipartite graph. Assume that n greater than or equal to 2 and F is a subset of edges with F less than or equal to n - 2. We prove that there exists a hamiltonian path in Q(n) - F between any two vertices of different partite sets. Moreover, there exists a path of length 2(n) - 2 between any two vertices of the same partite set. Assume that n greater than or equal to 3 and F is a subset of edges with F less than or equal to n - 3. We prove that there exists a hamiltonian path in Q(n) - {v} - F between any two vertices in the partite set without v. Furthermore, all bounds are tight. ", "keywords": "hamiltonian laceable;hypercube;fault tolerance", "title": "Fault-tolerant hamiltonian laceability of hypercubes"} {"abstract": "Recommender systems are increasingly being employed to personalize services, such as on the web, but also in electronics devices, such as personal video recorders. 
These recommenders learn a user profile, based on rating feedback from the user on, e.g., books, songs, or TV programs, and use machine learning techniques to infer the ratings of new items. The techniques commonly used are collaborative filtering and naive Bayesian classification, and they are known to have several problems, in particular the cold-start problem and slow adaptivity to changing user preferences. These problems can be mitigated by allowing the user to set up or manipulate his profile. In this paper, we propose an extension to the naive Bayesian classifier that enhances user control. We do this by maintaining and flexibly integrating two profiles for a user, one learned by rating feedback, and one created by the user. In particular, we show how the cold-start problem is mitigated.", "keywords": "multi-valued features;naive bayes;user profile;machine learning;classification;recommender;user control", "title": "incorporating user control into recommender systems based on naive bayesian classification"} {"abstract": "A new population variation approach is proposed, whereby the size of the population is systematically varied during the execution of the genetic programming process with the aim of reducing the computational effort compared with standard genetic programming (SGP). Various schemes for altering population size under this proposal are investigated using a comprehensive range of standard problems to determine whether the nature of the population variation, i.e. the way the population is varied during the search, has any significant impact on GP performance. The initial population size is varied in relation to the initial population size of the SGP such that the worst-case computational effort is never greater than that of the SGP. It is subsequently shown that the proposed population variation schemes do have the capacity to provide solutions at a lower computational cost compared with the SGP.", "keywords": "genetic programming;computational effort;average number of evaluations;convergence;population variation", "title": "Population variation in genetic programming"} {"abstract": "Debugging an application for power has a wide array of benefits ranging from minimizing thermal hotspots to reducing the likelihood of CPU malfunction. In this work, we justify the need for power debugging, and show that performance debugging of a parallel application does not automatically guarantee power balance across multiple cores. We perform experiments and show our results using two case study benchmarks, Volrend from Splash-2 and Bodytrack from Parsec-1.0.", "keywords": "multi-cores;power debugging;power imbalance", "title": "The Need for Power Debugging in the Multi-Core Environment"} {"abstract": "Information technology and its wide range of applications have begun to make their presence felt in a new generation of the logistics and distribution service industry. A more flexible breed of application packages is emerging through the application of fourth-generation language (4GL) technologies, which are able to provide foundations for true enterprise resource planning (ERP). There are many good reasons for adopting enterprise-wide resource planning systems. This research, however, focuses on the development of a human resource assignment module (HR module), usually considered an essential part of an ERP system. This module provides crucial human resource data and supports decisions in human resource utilization in distribution center operations. 
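The assignment algorithm itself is only summarized in this abstract, but a generic formulation conveys the flavour: match workers to order-picking tasks by minimising total expected time. The cost matrix below is a hypothetical example, and the Hungarian-method solver is a standard stand-in, not necessarily the module's actual algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical expected completion times (minutes) of worker i on task j,
# e.g. derived from historical pick rates stored in the HR data.
cost = np.array([
    [14, 22, 30],
    [18, 17, 25],
    [21, 19, 16],
])

workers, tasks = linear_sum_assignment(cost)   # optimal one-to-one matching
for w, t in zip(workers, tasks):
    print(f"worker {w} -> task {t} ({cost[w, t]} min)")
print("total:", cost[workers, tasks].sum())    # 47
```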
We detail the crucial algorithm for the HR module, which provides efficient and effective manpower management for key logistic/distribution center operations.", "keywords": "human resource management;decision-support systems;order picking", "title": "Human resource assignment system for distribution centers"} {"abstract": "Using a microscopic mean-field theory, we compute the structure and the quasiparticle excitation spectrum of a dilute, trapped Bose-Einstein condensate penetrated by an axisymmetric vortex line. The Gross-Pitaevskii equation for the condensate and the coupled Hartree-Fock-Bogoliubov-Popov equations describing the elementary excitations are solved self-consistently using finite-difference methods. We find locally stable vortex configurations at all temperatures below T_c. ", "keywords": "bose-einstein condensation;vortices;finite-difference methods", "title": "Quantized circulation in dilute Bose-Einstein condensates"} {"abstract": "The design and evaluation of a high performance soft keyboard for mobile systems are described. Using a model to predict the upper-bound text entry rate for soft keyboards, we designed a keyboard layout with a predicted upper-bound entry rate of 58.2 wpm. This is about 35% faster than the predicted rate for a QWERTY layout. We compared our design (OPTI) with a QWERTY layout in a longitudinal evaluation using five participants and 20 45-minute sessions of text entry. Average entry rates for OPTI increased from 17.0 wpm initially to 44.3 wpm at session 20. The average rates exceeded those for the QWERTY layout after the 10th session (about 4 hours of practice). A regression equation (R = .997) in the form of the power-law of learning predicts that the upper-bound rate would be reached at about session 50.", "keywords": "pen input;regression;power law;design;high-performance;layout;linguistic models;digraph probabilities;learning;text entry;model;soft keyboards;practical;fitts' law;stylus input;evaluation;participant;mobile systems;predict", "title": "the design and evaluation of a high-performance soft keyboard"} {"abstract": "Using an MPEG-4 MAC (multiple auxiliary component) system is a good way to encode stereoscopic video with existing standard CODECs, especially when it comes to low bitrate applications. In this paper, we discuss the properties and problems of MAC systems when encoding stereoscopic video, and propose a MAC-based stereoscopic video coder and disparity estimation scheme to solve those problems. We used a reconstructed disparity map during the disparity compensation process and took that disparity map into account while estimating the base-view sequence motion vectors. Moreover, we proposed a search range finding and illumination imbalance decision system. We also proposed a block-based disparity map regularization process as well as block splitting in the object boundary and occlusion regions (to reduce the number of bits to encode in both the disparity map and the residual image). Lastly, we compensated for the imbalance between two cameras with a novel system that used MAC characteristics. 
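Block-based disparity estimation of the kind relied on above can be sketched as a sum-of-absolute-differences (SAD) search over horizontal shifts. The block size, search range, and synthetic stereo pair below are arbitrary demo values, not the parameters of this coder.

```python
import numpy as np

def block_disparity(left, right, block=8, max_disp=16):
    """For each block of the right view, find the horizontal shift into the
    left view that minimises the SAD."""
    h, w = right.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = right[y:y + block, x:x + block].astype(float)
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, w - x - block) + 1):
                cand = left[y:y + block, x + d:x + d + block].astype(float)
                sad = np.abs(ref - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

rng = np.random.default_rng(3)
left = rng.uniform(0, 255, (32, 64))
right = np.roll(left, -5, axis=1)           # synthetic 5-pixel disparity
print(block_disparity(left, right)[0])      # 5 everywhere except the right edge
```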
Experimental results indicate that the proposed MAC system outperformed conventional stereo coding systems by up to 3.5 dB in terms of PSNR and 10-18% in terms of bit saving, especially in low bitrate applications.", "keywords": "mpeg-4 mac;stereoscopic video coder;disparity estimation", "title": "Stereoscopic video coding and disparity estimation for low bitrate applications based on MPEG-4 multiple auxiliary components"} {"abstract": "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest-rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data are clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outliers as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.", "keywords": "low-rank representation;subspace clustering;segmentation;outlier detection", "title": "Robust Recovery of Subspace Structures by Low-Rank Representation"} {"abstract": "In 1989, Pelgrom et al. published a mismatch model for MOS transistors, where the variation of parameter mismatch between two identical transistors is given by two independent terms: a size-dependent term and a distance-dependent term. Some CAD tools based on a nonphysical interpretation of Pelgrom's distance term result in excessively computationally expensive algorithms, which become nonviable even for circuits with a reduced number of transistors. Furthermore, some researchers are reporting new variations on the original nonphysically interpreted algorithms, which may render false results. The purpose of this paper is to clarify the physical interpretation of the distance term of Pelgrom et al. and indicate how to model it efficiently in prospective CAD tools.", "keywords": "analog design;mismatch gradient planes;mismatch modeling;pelgrom model;sigma-space analysis", "title": "On an efficient CAD implementation of the distance term in Pelgrom's mismatch model"} {"abstract": "Asteroidal Triple-free (AT-free) graphs have received considerable attention due to their inclusion of various important graph families, such as interval and cocomparability graphs. The asteroidal number of a graph is the size of a largest subset of vertices such that the removal of the closed neighborhood of any vertex in the set leaves the remaining vertices of the set in the same connected component. (AT-free graphs have asteroidal number at most 2.) In this article, we characterize graphs of bounded asteroidal number by means of a vertex elimination ordering, thereby solving a long-standing open question in algorithmic graph theory. 
Similar characterizations are known for chordal, interval, and cocomparability graphs.", "keywords": "asteroidal triple;at-free;vertex elimination;asteroidal number", "title": "Vertex Ordering Characterizations of Graphs of Bounded Asteroidal Number"} {"abstract": "Clustering results validation is an important topic in the context of pattern recognition. We review approaches and systems in this context. In the first part of this paper we presented clustering validity checking approaches based on internal and external criteria. In the second, current part, we present a review of clustering validity approaches based on relative criteria. We also discuss the results of an experimental study based on widely known validity indices. Finally, the paper illustrates the issues that are under-addressed by recent approaches and proposes research directions in the field.", "keywords": "clustering validation;pattern discovery;unsupervised learning", "title": "Clustering validity checking methods: Part II"} {"abstract": "The paper justifies the necessity to introduce the students from the 'Computer Systems and Technologies' degree course to the structure and way of operation of the interrupt system - one of the important components of the processor. Analysis of the basic functionality of an example interrupt system is presented, an existing interrupt system is selected as a prototype of the training model, and the arguments for its selection are given. The paper also describes the implemented model and its features. Working with the model will enable students to comprehend how the interrupt system operates, and it will also be used to check and assess their knowledge.", "keywords": "interrupt system;training software model;virtual laboratory;simulation", "title": "a training software model of an interrupt system"} {"abstract": "This paper addresses the problem of providing per-connection end-to-end delay guarantees in a high-speed network. We assume that the network is connection-oriented and enforces some admission control which ensures that the source traffic conforms to specified traffic characteristics. We concentrate on the class of rate-controlled service (RCS) disciplines, in which traffic from each connection is reshaped at every hop, and develop end-to-end delay bounds for the general case where different reshapers are used at each hop. In addition, we establish that these bounds can also be achieved when the shapers at each hop have the same 'minimal' envelope. The main disadvantage of this class of service discipline is that the end-to-end delay guarantees are obtained as the sum of the worst-case delays at each node, but we show that this problem can be alleviated through 'proper' reshaping of the traffic. We illustrate the impact of this reshaping by demonstrating its use in designing RCS disciplines that outperform service disciplines that are based on generalized processor sharing (GPS). Furthermore, we show that we can restrict the space of 'good' shapers to a family which is characterized by only one parameter. We also describe extensions to the service discipline that make it work-conserving and, as a result, reduce the average end-to-end delays.", "keywords": "qos provisioning;real-time traffic;traffic shaping;atm;scheduling;end-to-end delay guarantees", "title": "Efficient network QoS provisioning based on per node traffic shaping"} {"abstract": "In this paper, we study the inter-domain Autonomous System (AS)-level routing problem within an alliance of ASs. 
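The alliance routing model detailed in the sentences that follow rests on directional metrics over (ingress point, transit AS, egress point) triplets; a toy shortest-path search over such states might look like this. All ASs, peering links, and costs below are invented, and this Dijkstra variant is only a baseline sketch, not the near-optimal heuristic this abstract proposes.

```python
import heapq

# Directional metrics on triplets (ingress, transit AS, egress): the traversal
# cost of an AS depends on where traffic enters and leaves it.
transit = {
    ("i1", "AS1", "e1"): 2, ("i1", "AS1", "e2"): 5,
    ("e1", "AS2", "e3"): 1, ("e2", "AS3", "e3"): 1,
}
peerings = {"e1": [("AS2", "e1")], "e2": [("AS3", "e2")]}  # egress -> next (AS, ingress)

def cheapest_path(src_as, src_in, dst_as):
    """Dijkstra over (AS, ingress) states with triplet-dependent costs."""
    heap = [(0, src_as, src_in, [src_as])]
    best = {}
    while heap:
        cost, asn, ingress, path = heapq.heappop(heap)
        if asn == dst_as:
            return cost, path
        if best.get((asn, ingress), float("inf")) <= cost:
            continue
        best[(asn, ingress)] = cost
        for (i, a, egress), c in transit.items():
            if a == asn and i == ingress:
                for nxt_as, nxt_in in peerings.get(egress, []):
                    heapq.heappush(heap, (cost + c, nxt_as, nxt_in, path + [nxt_as]))
    return None

print(cheapest_path("AS1", "i1", "AS2"))   # (2, ['AS1', 'AS2'])
```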
We first describe the framework of our work, based on the introduction of a service plane for automatic multi-domain service provisioning. We adopt an abstract representation of domain relationships by means of directional metrics which are applied to a triplet (ingress point, transit AS, egress point), where the ingress and egress points can be ASs or routers. Then, we focus on the point-to-point and multipoint AS-level routing problems that arise in such an architecture. We propose an original approach that reaches near-optimal solutions with tractable computation times. A further contribution of this paper is that a heavy step in the proposed heuristic can be precomputed, independently of the service demands. Moreover, we describe how in this context AS-level path diversity can be considered, and present the related extension of our heuristic. By extensive tests on AS graphs derived from the Internet, we show that our heuristic is often equal to, or within a few percent of, the optimum, and that, in the case of precomputation, its time consumption can be much lower than with other well-known algorithms.", "keywords": "inter-domain routing;inter-as mpls;qos routing;multipoint routing;as-level routing", "title": "AS-level source routing for multi-provider connection-oriented services"} {"abstract": "This paper describes the parallel simulation of sediment dynamics in shallow water. By using a Lagrangian model, the problem is transformed to one in which a large number of independent particles must be tracked. This results in a technique that can be parallelised with high efficiency. We have developed a sediment transport model using three different sediment suspension methods. The first method uses a modified mean for the Poisson distribution function to determine the expected number of suspended particles in each particular grid cell of the domain over all available processors. The second method determines the number of particles to suspend with the aid of the Poisson distribution function only in those grid cells which are assigned to that processor. The third method is based on the technique of using a synchronised pseudo-random-number generator to generate identical numbers of suspended particles in all valid grid cells for each processor. Parallel simulation experiments are performed in order to investigate the efficiency of these three methods. The parallel performance of the implementations is also analysed. We conclude that the second method is the best method on distributed computing systems (e.g., a Beowulf cluster), whereas the third maintains the best load distribution.", "keywords": "lagrangian particle model;stochastic differential equation;sediment transport;parallel processing;speed up;load balance;efficiency", "title": "Parallel and distributed simulation of sediment dynamics in shallow water using particle decomposition approach"} {"abstract": "A frequency domain based algorithm using Fourier approximation and Galerkin error minimization has been used to obtain the periodic orbits of large order nonlinear dynamic systems. The stability of these periodic responses is determined through a bifurcation analysis using Floquet theory. This technique is applicable to dynamic systems having both analytic and nonanalytic nonlinearities. 
This technique is compared with numerical time integration and is found to be much faster in predicting the steady-state periodic response.", "keywords": "fourier-galerkin-newton technique;floquet analysis;bifurcations", "title": "A numerical technique to predict periodic and quasi-periodic response of nonlinear dynamic systems"} {"abstract": "In rough set theory, the lower and upper approximation operators can be constructed via a variety of approaches. Various fuzzy generalizations of rough approximation operators have been made over the years. This paper presents a framework for the study of rough fuzzy sets on two universes of discourse. By means of a binary relation between two universes of discourse, a covering and three relations are induced to a single universe of discourse. Based on the induced notions, four pairs of rough fuzzy approximation operators are proposed. These models guarantee that the approximating sets and the approximated sets are on the same universe of discourse. Furthermore, the relationship between the new approximation operators and the existing rough fuzzy approximation operators on two universes of discourse is scrutinized, and some interesting properties are investigated. Finally, the connections between these approximation operators are made, and conditions under which some of these approximation operators are equivalent are obtained.", "keywords": "binary relations;coverings;fuzzy sets;rough fuzzy approximation operators;two universes", "title": "Rough fuzzy approximations on two universes of discourse"} {"abstract": "Diabetes is a disease which occurs when the pancreas does not secrete enough insulin or the body is unable to process it properly. This disease slowly affects the circulatory system, including that of the retina. As diabetes progresses, the vision of a patient may start to deteriorate and lead to diabetic retinopathy. In this study on different stages of diabetic retinopathy, 124 retinal photographs were analyzed. As a result, four groups were identified, viz., normal retina, moderate non-proliferative diabetic retinopathy, severe non-proliferative diabetic retinopathy and proliferative diabetic retinopathy. Classification of the four eye diseases was achieved using a three-layer feedforward neural network. The features are extracted from the raw images using image processing techniques and fed to the classifier for classification. We demonstrate a sensitivity of more than 90% for the classifier with a specificity of 100%.", "keywords": "eye;normal;features;retinopathy;neural network;image processing;feedforward;classification", "title": "Identification of different stages of diabetic retinopathy using retinal optical images"} {"abstract": "This paper proposes a novel thinning algorithm and applies it to automatic constrained ZIP code segmentation. The segmentation method consists of two main stages: removal of rectangle boxes and location of ZIP code digits. Both stages are implemented on the skeleton of the boxes, which is extracted by the proposed pulse coupled neural network (PCNN) based thinning algorithm. This algorithm is specially designed to skeletonize only the boxes. At the second stage, a projection method is employed to segment the ZIP code image into its constituent digits. 
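For readers unfamiliar with PCNNs, a generic simplified iteration looks like the following; this is the standard pulse/threshold dynamic with invented parameters, not the specialised thinning variant this abstract proposes.

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn(S, steps=10, beta=0.2, vt=20.0, a_t=0.3):
    """Minimal simplified PCNN: neurons pulse when modulated activity
    exceeds a decaying, self-inhibiting threshold."""
    kernel = np.array([[0.5, 1, 0.5], [1, 0, 1], [0.5, 1, 0.5]])
    Y = np.zeros_like(S)
    theta = np.full(S.shape, 1.0)
    fired = []
    for _ in range(steps):
        L = convolve2d(Y, kernel, mode="same")   # linking input from neighbours
        U = S * (1 + beta * L)                   # modulated internal activity
        Y = (U > theta).astype(float)            # pulse generation
        theta = theta * np.exp(-a_t) + vt * Y    # threshold decay + refractory kick
        fired.append(Y.copy())
    return fired

S = np.zeros((9, 9)); S[3:6, 3:6] = 1.0          # a bright square stimulus
waves = pcnn(S)
print([int(w.sum()) for w in waves[:4]])         # the square pulses once the threshold decays
```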
Experimental results show that the proposed method is very efficient in segmenting ZIP code images, even in the presence of noise.", "keywords": "constrained zip code segmentation;pulse coupled neural network;skeleton;projection", "title": "Constrained ZIP code segmentation by a PCNN-based thinning algorithm"} {"abstract": "In this paper, we consider an authentication framework for independent modalities based on binary hypothesis testing using source coding jointly with random projections. The source coding ensures the multimodal signals reconstruction at the decoder based on the authentication data. The random projections are used to cope with the security, privacy, robustness and complexity issues. Finally, the authentication performance is investigated for both the direct and random projection domains. The asymptotic performance approximation is derived and compared with the exact solutions. The impact of modality fusion on the authentication system performance is demonstrated.", "keywords": "dimensionality reduction;fusion;hypothesis testing;random projections;multimodal authentication", "title": "multimodal authentication based on random projections and source coding"} {"abstract": "Alloying elements can substantially affect the formation and morphological stability of nickel monosilicide. A comprehensive study of phase formation was performed on 24 Ni alloys with varying concentrations of alloying elements. Silicide films have been used for more than 15 years to contact the source, drain and gate of state-of-the-art complementary-metal-oxide-semiconductor (CMOS) devices. In the past, the addition of alloying elements was shown to improve the transformation from the high-resistivity C49 to the low-resistivity C54-TiSi2 phase and to allow for the control of surface and interface roughness of CoSi2 films as well as produce significant improvements with respect to agglomeration of the films. Using simultaneous time-resolved X-ray diffraction (XRD), resistance and light scattering measurements, we follow the formation of the silicide phases in real time during rapid thermal annealing. Additions to the NiSi system lead to modifications in the phase formation sequence at low temperatures (metal-rich phases), to variations in the formation temperatures of NiSi and NiSi2, and to changes in the agglomeration behavior of the films formed. Of the 24 elements studied, additions of Mo, Re, Ta and W are amongst the most efficient to retard agglomeration, while elements such as Pd, Pt and Rh are most efficient to retard the formation of NiSi2.", "keywords": "nickel silicides;nisi;alloying;agglomeration;nisi2", "title": "Effects of additive elements on the phase formation and morphological stability of nickel monosilicide films"} {"abstract": "This paper presents two new high-order OTA-C universal filters. The first proposed filter structure employs n + 3 operational transconductance amplifiers (OTAs) and n grounded capacitors, which can realize nth-order multiple-mode (including voltage, current, transadmittance, and transimpedance modes) universal filtering responses (lowpass, highpass, bandpass, bandreject, and allpass) from the same topology. Since the OTA has high input and output impedances, it is very suitable for transadmittance-mode circuit applications. Therefore, a new high-order transadmittance-mode OTA-C universal filter structure using the minimum components is introduced. 
The second proposed filter structure uses only n + 1 OTAs and n grounded capacitors, which are the minimum components necessary for realizing nth-order transadmittance-mode universal filtering responses (lowpass, highpass, bandpass, bandreject, and allpass) from the same topology. This is an attractive feature from the standpoint of chip area and power consumption. Moreover, the two new OTA-C universal filters still enjoy many important advantages: no need of extra inverting or double-type amplifiers for special input signals, using only n grounded capacitors, no need of any resistors, cascadable connection of the former voltage-mode stage and the latter current-mode stage, and low sensitivity performance. H-Spice simulations with a TSMC 0.35 μm process and ±1.65 V supply voltages are included and confirm the theoretical predictions.", "keywords": "operational transconductance amplifiers;multiple-mode;universal high-order filter;transimpedance-mode;transadmittance-mode", "title": "HIGH-ORDER MULTIPLE-MODE AND TRANSADMITTANCE-MODE OTA-C UNIVERSAL FILTERS"} {"abstract": "This paper presents mathematical foundations for studies of random fuzzy fractional integral equations which involve a fuzzy integral of fractional order. We consider two different kinds of such equations. Their solutions have different geometrical properties. The equations of the first kind possess solutions with trajectories of nondecreasing diameter of their consecutive values. On the other hand, the solutions to equations of the second kind have trajectories with nonincreasing diameter of their consecutive values. Firstly, the existence and uniqueness of solutions is investigated. This is shown by using a method of successive approximations. An error estimate for the nth approximation is given. The boundedness of the solution is also indicated. To show well-posedness of the considered theory, we prove that solutions depend continuously on the data of the equations. Some concrete examples of random fuzzy fractional integral equations are solved explicitly.", "keywords": "random fuzzy fractional integral equation;existence and uniqueness of solution;uncertainty;fuzzy differential equation;set differential equation;mathematical foundations", "title": "Random fuzzy fractional integral equations: theoretical foundations"} {"abstract": "Due to resource scarcity, a paramount concern in ad hoc networks is utilizing limited resources efficiently. The self-organized nature of ad hoc networks makes the network utility-based approach an efficient way to allocate limited resources. However, the effect of link instability has not yet been adequately addressed in the literature. To efficiently address the routing problem in ad hoc networks, we integrate cost and stability into a network utility metric, and adopt the metric to evaluate the routing optimality in a unified, opportunistic routing model. Based on this model, an efficient algorithm is designed, both centralized and distributed implementations are presented, and extensive simulations on NS-2 are conducted to verify our results.", "keywords": "ad hoc networks;distributed algorithms;network utility;opportunistic routing;stability", "title": "Efficient Opportunistic Routing in Utility-Based Ad Hoc Networks"} {"abstract": "Single nucleotide polymorphisms (SNPs) and short tandem repeats (STRs) are the most common genetic variations, are widespread within genomes, and form the diversity within species.
These genetic variations affect many regulatory elements such as transcription factor binding sites (TFBSs), DNA methylation sites on CpG islands, and microRNA target sites; these elements have been found to play major as well as indirect roles in regulating gene expression. Currently, systems are available to display such genetic variation occurring within regulatory elements. To understand and display all the potential variation described above, we have developed a web-based system tool, the Regulatory Element and Genetic Variation Viewer (REGV Viewer [REGV]), which provides a friendly web interface for users and shows genetic variation information within regulatory elements by either inputting a gene list or selecting a chromosome by name. Moreover, our tool not only supports logic operation queries, but after a query is submitted, it also shows a high-throughput simulation, including combined data, statistical graphs, and graphical views of the genetic variants and regulatory elements. Additionally, when the SNP variation occurs within TFBSs and if the SNP allele frequency and TFBS position weight matrices (PWMs) are available, our system will show the new putative TFBSs resulting from the SNP variation.", "keywords": "genetic variation;snp;tfbs", "title": "A Computation to Integrate the Analysis of Genetic Variations Occurring within Regulatory Elements and Their Possible Effects"} {"abstract": "We present a novel approach to combined textual and visual programming by allowing visual, interactive objects to be embedded within textual source code and segments of source code to be further embedded within those objects. We retain the strengths of text-based source code, while enabling visual programming where it is beneficial. Additionally, embedded objects and code provide a simple object-oriented approach to adding a visual form of LISP-style macros to a language. The ability to freely combine source code and visual, interactive objects with one another allows for the construction of interactive programming tools and experimentation with novel programming language extensions. Our visual programming system is supported by a type coercion-based presentation protocol that displays normal Java and Python objects in a visual, interactive form. We have implemented our system within a prototype interactive programming environment called The Larch Environment. ", "keywords": "java;python;environment;visual programming;interactive;visualization;implementation", "title": "Programs as visual, interactive documents"} {"abstract": "In this paper we analyze the equilibrium limit of the constitutive model for two-phase granular mixtures introduced in Papalexandris (2004) [13], and develop an algorithm for its numerical approximation. At equilibrium, the constitutive model reduces to a strongly coupled, overdetermined system of quasilinear elliptic partial differential equations with respect to the pressure and the volume fraction of the solid granular phase. First we carry out a perturbation analysis based on standard hydrostatic-type scaling arguments which reduces the complexity of the coupling of the equations. The perturbed system is then supplemented by an appropriate compatibility condition which arises from the properties of the gradient operator. Further, based on the Helmholtz decomposition and Ladyzhenskaya's decomposition theorem, we develop a projection-type, Successive-Over-Relaxation numerical method.
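A minimal illustration of a Successive-Over-Relaxation sweep of the kind this projection-type method builds on, applied here to a generic Poisson-type model problem rather than the full two-phase system (grid, relaxation factor, and boundary conditions are assumptions):

```python
import numpy as np

def sor_poisson(f, h, omega=1.8, tol=1e-8, max_iter=10_000):
    """Solve -laplace(u) = f on a square grid with zero Dirichlet
    boundaries by SOR: each Gauss-Seidel update is blended with the
    previous iterate through the relaxation factor omega."""
    u = np.zeros(f.shape)
    n = f.shape[0]
    for _ in range(max_iter):
        diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gs = 0.25 * (u[i+1, j] + u[i-1, j] + u[i, j+1]
                             + u[i, j-1] + h * h * f[i, j])
                new = (1 - omega) * u[i, j] + omega * gs
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new
        if diff < tol:          # converged when the sweep stalls
            break
    return u
```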
This method is general enough and can be applied to a variety of continuum models of complex mixtures and mixtures with micro-structure. We also prove that this method is both stable and consistent, hence, under standard assumptions, convergent. The paper concludes with the presentation of representative numerical results.", "keywords": "granular mixtures;complex fluids;overdetermined elliptic systems;ladyzhenskaya's theorem;successive-over-relaxation;predictor-corrector methods", "title": "The equilibrium limit of a constitutive model for two-phase granular mixtures and its numerical approximation"} {"abstract": "In this paper, an approach for Sonar target analysis based on a new energy-time-frequency representation, called the Teager-Huang Transform (THT), is presented. The THT is the combination of the empirical mode decomposition of Huang and the Teager-Kaiser signal demodulation method. The THT is free of interferences and does not require basis functions for signal decomposition. The analysis is carried out, in free field, from the impulse responses of Sonar targets. We compare the analysis results of impulse responses of spherical and cylindrical targets given by the THT to those of the smoothed Wigner-Ville transformation.", "keywords": "time frequency analysis;empirical mode decomposition;teager-kaiser energy operator;teager-huang transform;sonar echoes", "title": "Analysis of sonar targets by teager-huang transform (THT)"} {"abstract": "In September 2005, the international information technology standard body Object Management Group (OMG) published a Request for Proposal (RFP) for an international standard for Knowledge Based Engineering (KBE) Services for Product Lifecycle Management (PLM). The standard aims to facilitate the integration of KBE applications in a PLM environment. KBE has been used in key engineering industries to deliver significant business benefits and has been a catalyst for changes in engineering processes. In recent years, mainstream CAD vendors have begun to incorporate KBE functionalities in their solutions. PLM is evolving from a platform to manage engineering data into a repository of complete enterprise knowledge. As CAD becomes more knowledge based, the convergence of KBE and PLM is expected to happen soon. The OMG standard RFP is an action to accelerate this convergence. The RFP is the result of an international effort with a team that includes engineering end users, software vendors and researchers. This paper presents the essence and the development process of the RFP to widen the engagement with the engineering research community.", "keywords": "knowledge based engineering;product lifecycle management;engineering knowledge management", "title": "International Standard Development for Knowledge Based Engineering Services for Product Lifecycle Management"} {"abstract": "Topic models are a powerful tool for basic document and image processing tasks. In this study we introduce a novel image topic model, called the Latent Patch Model (LPM), which is a generative Bayesian model and assumes that the image and pixels are connected by a latent patch layer. Based on the LPM, we further propose an image denoising algorithm, namely multiple estimate LPM (MELPM). Unlike other works, the proposed denoising framework is totally implemented on the latent patch layer, and it is effective for both Gaussian white noise and impulse noise. Experimental results demonstrate that LPM performs well in representing images.
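For the sonar THT abstract above, the Teager-Kaiser demodulation step reduces, for discrete signals, to a simple three-sample energy operator; a minimal sketch (the empirical mode decomposition stage is omitted):

```python
import numpy as np

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1].
    Tracks the instantaneous energy of an AM-FM component."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# For a pure tone A*cos(w*n) the operator returns approximately
# A^2 * sin(w)^2, i.e. it grows with both amplitude and frequency,
# which is what makes it useful for demodulating each EMD mode.
```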
Its application in image denoising achieves PSNR and visual quality competitive with those of conventional algorithms.", "keywords": "topic model;denoising;patch clustering;semantic learning", "title": "An image topic model for image denoising"} {"abstract": "With the growing complexity of web applications, identifying web interfaces that can be used for testing such applications has become increasingly challenging. Many techniques that work effectively when applied to simple web applications are insufficient when used on modern, dynamic web applications, and may ultimately result in inadequate testing of the applications' functionality. To address this issue, we present a technique for automatically discovering web application interfaces based on a novel static analysis algorithm. We also report the results of an empirical evaluation in which we compare our technique against a traditional approach. The results of the comparison show that our technique can (1) discover a higher number of interfaces and (2) help generate test inputs that achieve higher coverage.", "keywords": "interface extraction;help;web-application testing;applications;test;dynamic;functional;addressing;web interface;static analysis;test case generation;interfaces;discoveries;web application;complexity;algorithm;coverage;comparisons;empirical evaluation;automation", "title": "improving test case generation for web applications using automated interface discovery"} {"abstract": "This paper describes a new numerical procedure, based on point collocation, integrated multiquadric functions and Cartesian grids, for the discretisation of the stream-function formulation for flows of a Newtonian fluid in multiply-connected domains. Three particular issues, namely (i) the derivation of the stream-function values on separate boundaries, (ii) the implementation of cross derivatives in irregular regions, and (iii) the treatment of double boundary conditions, are studied in the context of Cartesian grids and approximants based on integrated multiquadric functions in one dimension. Several test problems, i.e. steady flows between a rotating circular cylinder and a fixed square cylinder and also between eccentric cylinders maintained at different temperatures, are investigated. Results obtained compare well with numerical data available in the literature. ", "keywords": "stream-function formulation;multiply-connected domain;integrated radial-basis-function network;cartesian grid", "title": "Numerical study of stream-function formulation governing flows in multiply-connected domains by integrated RBFs and Cartesian grids"} {"abstract": "The influence of external physical variation such as temperature fluctuations on near-infrared (NIR) spectra and their effect on the predictive power of calibration models such as PLS have been studied. Different methods to correct for the temperature effect by explicitly including the temperature in a calibration model have been tested. The results are compared to the implicit inclusion, which takes the temperature into account only through the calibration design. Two data sets are used, one well-designed data set measured in the laboratory and one industrial data set consisting of measurements for process samples. For both data sets, the explicit inclusion of the temperature in the calibration models did not result in an improvement of the prediction accuracy compared to implicit inclusion.
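One way to reproduce the implicit-versus-explicit comparison just described is sketched below with scikit-learn's PLS regression; the data are synthetic stand-ins, and appending the temperature column is only one of several possible explicit-inclusion schemes:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))              # stand-in NIR spectra
T = rng.uniform(20, 60, size=(120, 1))       # sample temperatures
y = X[:, :5].sum(axis=1) + 0.01 * T.ravel()  # synthetic analyte level

# Implicit: temperature enters only through the calibration design.
implicit = PLSRegression(n_components=5)
# Explicit: temperature appended as an extra predictor column.
explicit = PLSRegression(n_components=5)

print("implicit R^2:", cross_val_score(implicit, X, y, cv=5).mean())
print("explicit R^2:",
      cross_val_score(explicit, np.hstack([X, T]), y, cv=5).mean())
```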
", "keywords": "spectral variation;temperature;nir spectra", "title": "Linear techniques to correct for temperature-induced spectral variation in multivariate calibration"} {"abstract": "A new variation of Overlapping B+ -trees is presented, which provides efficient indexing of transaction time and keys in a two dimensional key-time space. Modification operations (i.e. insertions, deletions and updates) are allowed at the current version, whereas queries are allowed to any temporal version, i.e. either in the current or in past versions. Using this structure, snapshot and range-timeslice queries can be answered optimally. However, the fundamental objective of the proposed method is to deliver efficient performance in case of a general pure-key query (i.e. 'history of a key'). The trade-off is a small increase in time cost for version operations and storage requirements. ", "keywords": "temporal databases;transaction time;access methods;indexing;algorithms;time and space performance", "title": "Overlapping B+-trees: An implementation of a transaction time access method"} {"abstract": "This study applies Five Forces Analysis to evaluate and select market segments for international business using a strategy-aligned fuzzy approach. An illustration segment evaluation procedure is used to demonstrate that our procedure is an effective quantification approach for integrating five forces, generic strategies and marketing information in a group decision-making process. The final decision-maker (DM) synthesizes the total crisp scores of individual alternatives by choosing judgmental coefficients ? based on individual attitude towards core business competitiveness and market risks to accommodate differences among market segments to the specific environment with a better understanding of the decision problem and individual decision-making behavior. In the illustration presented here, the final solution is then obtained by identifying the best market segment for further development and negotiation.", "keywords": "market segment evaluation;market segment selection;fuzzy factor rating system;strategy alignment;multiple attributes decision-making", "title": "Using a strategy-aligned fuzzy competitive analysis approach for market segment evaluation and selection"} {"abstract": "In this paper, a new hierarchical classification method based on the use of various types of AdaBoost classification algorithms is proposed for automatic classification of marble slab images according to their quality. At first, features are extracted using the sum and difference histograms method and, at the second stage, different versions of the AdaBoost algorithms are used as classifiers together with those extracted features in a proposed hierarchical fashion. Performance of the proposed method is compared against performances of different types of neural network classifiers and a support vector machine (SVM) classifier. Computational results show that the proposed hierarchical structure employing AdaBoost algorithms performs superior to neural networks and the SVM classifier for classifying marble slab images in our large and diversified data set. 
", "keywords": "classification of marble slab images;hierarchical classification;adaboost classification algorithms", "title": "Using AdaBoost classifiers in a hierarchical framework for classifying surface images of marble slabs"} {"abstract": "Stochastic integer programs (SIPs) represent a very difficult class of optimization problems arising from the presence of both uncertainty and discreteness in planning and decision problems. Although applications of SIPs are abundant, nothing is available by way of computational software. On the other hand, commercial software packages for solving deterministic integer programs have been around for quite a few years, and more recently, a package for solving stochastic linear programs has been released. In this paper, we describe how these software tools can be integrated and exploited for the effective solution of general-purpose SIPs. We demonstrate these ideas on four problem classes from the literature and show significant computational advantages.", "keywords": "stochastic programming;integer programming;branch and bound;software", "title": "On bridging the gap between stochastic integer programming and MIP solver technologies"} {"abstract": "To better understand the topic of this colloquium, we have created a series of databases related to knowledge domains (dynamic systems [small world/Milgram], information visualization [Tufte], co-citation [Small], bibliographic coupling [Kessler], and scientometrics [Scientometrics]). I have used a software package called HistCite(TM) which generates chronological maps of subject (topical) collections resulting from searches of the ISI Web of Science(R) or ISI citation indexes (SCI, SSCI, and/or AHCI) on CD-ROM. When a marked list is created on WoS, an export file is created which contains all cited references for each source document captured. These bibliographic collections, saved as ASCII files, are processed by HistCite in order to generate chronological and other tables as well as historiographs which highlight the most-cited works in and outside the collection. HistCite also includes a module for detecting and editing errors or variations in cited references as well as a vocabulary analyzer which generates both ranked word lists and word pairs used in the collection. Ideally the system will be used to help the searcher quickly identify the most significant work on a topic and trace its year-by-year historical development. In addition to the collections mentioned above, historiographs based on collections of papers that cite the Watson-Crick 1953 classic paper identifying the helical structure of DNA were created. Both year-by-year as well as month-by-month displays of papers from 1953 to 1958 were necessary to highlight the publication activity of those years.", "keywords": "mapping;knowledge domains;small world concept;dna structure;citation analysis;historiography;information visualization;software;histcite", "title": "Historiographic mapping of knowledge domains literature"} {"abstract": "Botnets have continuously evolved since their inception as a malicious entity. Attackers come up with new botnet designs that exploit the weaknesses in existing defense mechanisms and continue to evade detection. It is necessary to analyze the weaknesses of existing defense mechanisms to find out the lacunae in them. This research exposes a weakness found in an existing bot detection method (BDM) by implementing a specialized P2P botnet model and carrying out experiments on it. 
Weaknesses that are found and validated can be used to predict the development path of botnets, and as a result, detection and mitigation measures can be implemented in a proactive fashion. The main contribution of this work is to demonstrate the exploitation pattern of an inherent weakness in local-host alert correlation (LHAC) based methods and to assert that current LHAC implementations could allow pockets of cooperative bots to hide in an enterprise-size network. This work suggests that additional monitoring capabilities must be added to current LHAC-based methods in order for them to remain a viable bot detection mechanism. ", "keywords": "botnet;p2p;security;network;covert", "title": "Bot detection evasion: a case study on local-host alert correlation bot detection methods"} {"abstract": "A new copper plating bath for electroless deposition directly on conductive copper-diffusion barrier layers has been developed. This plating bath can be operated at temperatures between 20 and 50 °C and has good stability. High-temperature processing allows for increased deposition rates and decreased specific resistivity values for the deposited copper films. Electroless Cu films deposited from this bath showed conformal step coverage in high aspect ratio trenches and, therefore, are promising as seed layers for copper electroplating. The effect of the bath composition, activation procedure and processing temperature on the plating rate and morphology of the deposited copper has been studied and is presented here.", "keywords": "interconnection;copper;electroless deposition", "title": "New plating bath for electroless copper deposition on sputtered barrier layers"} {"abstract": "This paper presents an automatic locally adaptive finite element solver for fully-coupled EHL point contact problems. The proposed algorithm uses a posteriori error estimation in the stress in order to control adaptivity in both the elasticity and lubrication domains. The implementation is based on the fact that the solution of the linear elasticity equation exhibits large variations close to the fluid domain on which the Reynolds equation is solved. Thus the local refinement in such regions not only improves the accuracy of the elastic deformation solution significantly but also yields improved accuracy in the pressure profile due to the increase in the spatial resolution of the fluid domain. Thus, the improved traction boundary conditions lead to an even better approximation of the elastic deformation. Hence, a simple and effective way to develop an adaptive procedure for the fully-coupled EHL problem is to apply the local refinement to the linear elasticity mesh. The proposed algorithm also seeks to improve the quality of refined meshes to ensure the best overall accuracy. It is shown that the adaptive procedure effectively refines the elements in the region(s) showing the largest local error in their solution, and reduces the overall error with optimal computational cost for a variety of EHL cases.
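The adaptive cycle described above follows the familiar solve-estimate-mark-refine pattern; a one-dimensional toy sketch of that control flow (the error indicator, marking fraction, and injected callables are stand-ins for the paper's stress-based estimator, not its implementation):

```python
import numpy as np

def adapt_mesh(nodes, solve, estimate, max_level=10, theta=0.5):
    """Repeatedly solve, estimate a per-element error, and bisect the
    elements carrying the largest errors (Dorfler-style marking).
    nodes: sorted 1D array of mesh points; solve/estimate: callables
    supplied by the caller (hypothetical placeholders here)."""
    for _ in range(max_level):
        u = solve(nodes)
        eta = estimate(u, nodes)          # one indicator per element
        order = np.argsort(eta)[::-1]     # largest error first
        marked, acc, total = [], 0.0, eta.sum()
        for k in order:                   # mark until theta of error
            marked.append(k)
            acc += eta[k]
            if acc >= theta * total:
                break
        mids = 0.5 * (nodes[:-1] + nodes[1:])[marked]
        nodes = np.sort(np.concatenate([nodes, mids]))
    return nodes
```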
Specifically, the computational cost of the proposed adaptive algorithm is shown to be linear with respect to problem size as the number of refinement levels grows.", "keywords": "elastohydrodynamic lubrication;finite element method;linear elasticity;fully coupled approach;adaptive h-refinement;optimization of meshes", "title": "An adaptive finite element procedure for fully-coupled point contact elastohydrodynamic lubrication problems"} {"abstract": "In this paper we study distributed algorithms on massive graphs where links represent a particular relationship between nodes (for instance, nodes may represent phone numbers and links may indicate telephone calls). Since such graphs are massive, they need to be processed in a distributed way. When computing graph-theoretic properties, nodes become natural units for distributed computation. Links do not necessarily represent communication channels between the computing units and therefore do not restrict the communication flow. Our goal is to model and analyze the computational power of such distributed systems where one computing unit is assigned to each node. Communication takes place on a whiteboard where each node is allowed to write at most one message. Every node can read the contents of the whiteboard and, when activated, can write one small message based on its local knowledge. When the protocol terminates, its output is computed from the final contents of the whiteboard. We describe four synchronization models for accessing the whiteboard. We show that message size and synchronization power constitute two orthogonal hierarchies for these systems. We exhibit problems that separate these models, i.e., that can be solved in one model but not in a weaker one, even with increased message size. These problems are related to maximal independent set and connectivity. We also exhibit problems that require a given message size independently of the synchronization model.", "keywords": "distributed computing;local computation;graph properties;bounded communication", "title": "Allowing each node to communicate only once in a distributed system: shared whiteboard models"} {"abstract": "After a complete spinal cord injury (SCI) at the lowest thoracic level (T13), adult cats trained to walk on a treadmill can recover hindlimb locomotion within 2-3 weeks, resulting from the activity of a spinal circuitry termed the central pattern generator (CPG). The role of this spinal circuitry in the recovery of locomotion after partial SCIs, when part of the descending pathways can still access the CPG, is not yet fully understood. Using a dual spinal lesion paradigm (first hemisection at T10 followed three weeks after by a complete spinalization at T13), we showed that major changes occurred in this locomotor spinal circuitry. These plastic changes at the spinal cord level could participate in the recovery of locomotion after partial SCI. This short review describes the main findings of this paradigm in adult cats.", "keywords": "central pattern generator;plasticity;training;spinal cord injury;locomotion", "title": "A dual spinal cord lesion paradigm to study spinal locomotor plasticity in the cat"} {"abstract": "We present the way in which we have constructed an implementation of a sparse Cholesky factorization based on a hypermatrix data structure. This data structure is a storage scheme which produces a recursive 2D partitioning of a sparse matrix. It can be useful on some large sparse matrices. Subblocks are stored as dense matrices.
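The idea of storing subblocks densely can be sketched as follows; this toy version uses a single level of fixed-size 2D blocking rather than the recursive hypermatrix partitioning, and assumes for brevity that the dimension is a multiple of the block size:

```python
import numpy as np

class BlockSparse:
    """Fixed-size 2D blocking: a dict maps block coordinates to dense
    sub-blocks; absent keys represent all-zero blocks. Dense storage
    of each block is what lets BLAS3-style kernels run on them."""
    def __init__(self, n, bs):
        assert n % bs == 0, "toy version: n must be a multiple of bs"
        self.n, self.bs, self.blocks = n, bs, {}

    def set(self, i, j, v):
        bi, bj = i // self.bs, j // self.bs
        blk = self.blocks.setdefault(
            (bi, bj), np.zeros((self.bs, self.bs)))
        blk[i % self.bs, j % self.bs] = v

    def matvec(self, x):
        y = np.zeros(self.n)
        for (bi, bj), blk in self.blocks.items():
            r, c = bi * self.bs, bj * self.bs
            # dense kernel on each stored sub-block (BLAS-friendly)
            y[r:r+self.bs] += blk @ x[c:c+self.bs]
        return y
```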
Thus, efficient BLAS3 routines can be used. However, since we are dealing with sparse matrices, some zeros may be stored in those dense blocks. The overhead introduced by the operations on zeros can become large and considerably degrade performance. We present the ways in which we deal with this overhead. Using matrices from different areas (Interior Point Methods of linear programming and Finite Element Methods), we evaluate our sequential in-core hypermatrix sparse Cholesky implementation. We compare its performance with several other codes and analyze the results. In spite of using a simple fixed-size partitioning of the matrix, our code obtains competitive performance.", "keywords": "sparse cholesky;hypermatrix structure;2d partitioning;windows in submatrices;small matrix library", "title": "Analysis of a sparse hypermatrix Cholesky with fixed-sized blocking"} {"abstract": "H depassivation lithography is a process by which a monolayer of H adsorbed on a Si(100) 2×1 surface may be patterned by the removal of H atoms using a scanning tunneling microscope. This process can achieve atomic resolution where individual atoms are targeted and removed. This paper suggests that such a patterning process can be carried out as a digital process, where the pixels of the pattern are the individual H atoms. The goal is digital fabrication rather than digital information processing. The margins for the read and write operations appear to be sufficient for a digital process, and the tolerance for physical addressing of the atoms is technologically feasible. A digital fabrication process would enjoy some of the same advantages as digital computation, namely high reliability, error checking and correction, and the creation of complex systems.", "keywords": "lithography;scanning tunneling microscope;si;hydrogen depassivation;digital process", "title": "Atomic precision patterning on Si: An opportunity for a digitized process"} {"abstract": "A program mode is a regular trajectory of the execution of a program that is determined by the values of its input variables. By exploiting program modes, we may make worst-case execution time (WCET) analysis more precise. This paper presents a novel method to automatically find program modes and calculate the WCET estimates of programs. First, the modes of a program will be identified automatically by mode-relevant program slicing, and the precondition will be calculated for each mode using a path-wise test data generation method. Then, for each feasible mode, we show how to calculate its WCET estimate for modern reduced instruction set computer (RISC) processors with caches and pipelines and for traditional complex instruction set computer (CISC) processors. We also present a method to obtain the symbolic expression for each mode for CISC processors. The experimental results show the effectiveness of the method.", "keywords": "real-time systems;wcet analysis;program mode;program slicing;iterative relaxation method", "title": "Automated Worst-Case Execution Time Analysis Based on Program Modes"} {"abstract": "Purpose - The paper aims to assess the utility of non-agriculture-specific information systems, databases, and respective controlled vocabularies (thesauri) in organising and retrieving agricultural information.
The purpose is to identify thesaurus-linked tree structures, controlled subject headings/terms (heading words, descriptors), and principal database-dependent characteristics and assess how controlled terms improve retrieval results (recall) in relation to free-text/uncontrolled terms in abstracts and document titles. Design/methodology/approach - Several different hosts (interfaces, platforms, portals) and databases were used: CSA Illumina (ERIC, LISA), Ebscohost (Academic Search Complete, Medline, Political Science Complete), Ei-Engineering Village (Compendex, Inspec), OVID (PsycINFO), ProQuest (ABI/Inform Global). The search-terms agriculture and agricultural and the truncated word-stem agricultur- were employed. Permuted (rotated index) search fields were used to retrieve terms from thesauri. Subject-heading search was assessed in relation to free-text search, based on abstracts and document titles. Findings - All thesauri contain agriculture-based headings; however, associative, hierarchical and synonymous relationships show important inter-database differences. Using subject headings along with abstracts and titles in the search syntax (query) sometimes improves retrieval by up to 60 per cent. Retrieval depends on search fields and database-specifics, such as autostemming (lemmatization), explode function, word-indexing, or phrase-indexing. Research limitations/implications - Inter-database and host comparison, on consistent principles, can be limited because of some particular host- and database-specifics. Practical implications - End-users may exploit databases more competently and thus achieve better retrieval results in searching for agriculture-related information. Originality/value - The function of as many as ten databases in different disciplines in providing information relevant to subject matter that is not a topical focus of the databases is assessed.", "keywords": "thesauri;controlled vocabularies;indexing;subject headings;databases;agriculture", "title": "Non-agricultural databases and thesauri: Retrieval of subject headings and non-controlled terms in relation to agriculture"} {"abstract": "Packet classification categorizes incoming packets into multiple forwarding classes based on pre-defined filters. This categorization makes information accessible for quality of service or security handling in the network. In this paper, we propose a scheme which combines the Aggregate Bit Vector algorithm and the pruned Tuple Space Search algorithm to improve the performance of packet classification in terms of speed and storage. We also present procedures for incremental updates. Our scheme is evaluated with filter databases of varying sizes and characteristics. The experimental results demonstrate that our scheme is feasible and scalable.", "keywords": "packet classification;network intrusion detection systems;firewalls;qos;packet forwarding", "title": "Efficient Packet Classification with a Hybrid Algorithm"} {"abstract": "A relevant climate feature of the Intra-Americas Sea (IAS) is the low-level jet (IALLJ) dominating the IAS circulation, both in summer and winter; yet it is practically unknown with regard to its nature, structure, interactions with mid-latitude and tropical phenomena, and its role in regional weather and climate.
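Returning to the hybrid packet classifier above: its pruned Tuple Space Search component can be sketched as hash tables keyed by prefix-length pairs, one probe per tuple; the rule format is simplified, and the Aggregate Bit Vector stage and rule-priority resolution are omitted:

```python
def build_tuple_space(filters):
    """filters: list of (src_pfx, src_len, dst_pfx, dst_len, action)
    tuples over 32-bit addresses. Rules are grouped into hash tables
    keyed by their (src_len, dst_len) prefix-length tuple."""
    space = {}
    for sp, sl, dp, dl, action in filters:
        key = (sp >> (32 - sl), dp >> (32 - dl))   # masked prefixes
        space.setdefault((sl, dl), {})[key] = action
    return space

def classify(space, src, dst):
    """Probe each tuple's table with the packet's masked header.
    First hit wins here; a real classifier resolves priorities."""
    for (sl, dl), table in space.items():
        key = (src >> (32 - sl), dst >> (32 - dl))
        if key in table:
            return table[key]
    return "default"
```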
This paper updates current knowledge of the IALLJ and its contribution to IAS circulation-precipitation patterns and presents recent findings about the IALLJ based on the first in situ observations during Phase 3 of the Experimento Climático en las Albercas de Agua Cálida (ECAC), an international field campaign to study IALLJ dynamics during July 2001. Nonhydrostatic fifth-generation Pennsylvania State University National Center for Atmospheric Research Mesoscale Model (MM5) simulations were compared with observations and reanalysis. Large-scale circulation patterns of the IALLJ northern hemisphere summer and winter components suggest that the trades, and so the IALLJ, are responding to land-ocean thermal contrasts during the summer season of each continent. The IALLJ is a natural component of the American monsoons as a result of the continent's approximate north-south land distribution. During warm (cold) El Niño-Southern Oscillation phases, winds associated with the IALLJ core (IALLJC) are stronger (weaker) than normal, so precipitation anomalies are positive (negative) in the western Caribbean near Central America and negative (positive) in the central IAS. During ECAC Phase 3, strong surface winds associated with the IALLJ induced upwelling, cooling down the sea surface temperature by 1-2 °C. The atmospheric mixed layer height reached 1 km near the surface wind maximum below the IALLJC. Observations indicate that primary water vapor advection takes place in a shallow layer between the IALLJC and the ocean surface. Latent heat flux peaked below the IALLJC. Neither the reanalysis nor MM5 captured the observed thermodynamic and kinematic IALLJ structure. So far, IALLJ knowledge is based on either dynamically initialized data or simulations of global (regional) models, which implies that a more systematic and scientific approach is needed to improve it. The Intra-Americas Study of Climate Processes is a great regional opportunity to address, through field work, modeling, and process studies, many of the unknown features of the IALLJ.", "keywords": "intra-americas low-level jet;tropical climate variability;mm5 modeling;el niño-southern oscillation;enso", "title": "The Intra-Americas Sea Low-level Jet"} {"abstract": "For the design of an \"intelligent\" assistant system aimed at supporting operators' decisions in subway control, we modeled operators' activity and know-how. As a result, we introduce the notion of a contextual graph, which appears as a simple solution to describe and manage operational decision-making.", "keywords": "context representation;contextual graphs;decision tree;knowledge representation;operational knowledge", "title": "Operational knowledge representation for practical decision-making"} {"abstract": "Multiagent workflow systems can be formalized from an organizational structure viewpoint, which includes three parts: the interaction structure among agents, the temporal flow of activities, and the critical resource sharing relations among activities. While agents execute activities, they should decide their strategies to satisfy the constraints brought by the organizational structure of the multiagent workflow system. To avoid collisions in the multiagent workflow system, this paper presents a method to determine social laws in the system to restrict the strategies of agents and activities; the determined social laws can satisfy the characteristics of organization structures so as to minimize the conflicts among agents and activities.
Moreover, we also deal with the social law adjustment mechanism for alterations of interaction relations, temporal flows, and critical resource sharing relations. It is proved that our model can produce useful social laws for the organizational structure of multiagent workflow systems, i.e., the conflicts brought by the constraints of the organization structure can be minimized.", "keywords": "multiagents;workflows;coordination;social laws;social strategies;organizational structures", "title": "ORGANIZATIONAL STRUCTURE-SATISFACTORY SOCIAL LAW DETERMINATION IN MULTIAGENT WORKFLOW SYSTEMS"} {"abstract": "A content-aware retargeting method is proposed for adapting soccer video to heterogeneous terminals. According to domain-specific knowledge, the ball, players and players' faces are defined as user interested objects (UIOs) in different view-types. The UIOs are extracted by semantic analysis of the soccer video, and then a region of interest (ROI) of each shot is determined jointly by three factors: terminal size, scaling factor and aspect ratio. The proposed method optimizes the retargeted region to contain more semantic content while adapting to the constraint of the terminal screen. The simulation results show that the proposed CAR system provides better viewing experiences than traditional methods such as resizing in a \"Letter box\" mechanism or cropping directly.", "keywords": "video retargeting;user interested object;region of interest;video analysis;view-type", "title": "CONTENT-AWARE RETARGETING FOR SOCCER VIDEO ADAPTATION"} {"abstract": "It is generally accepted that the translation rate depends on the availability of cognate aa-tRNAs. In this study it is shown that the key factor that determines the translation rate is the competition between near-cognate and cognate aa-tRNAs. The transport mechanism in the cytoplasm is diffusion, thus the competition between cognate, near-cognate and non-cognate aa-tRNAs to bind to the ribosome is a stochastic process. Two competition measures are introduced; C(i) and R(i) (i = 1, ..., 64) are quotients of the arrival frequencies of near-cognates vs. cognates and non-cognates vs. cognates, respectively. Furthermore, the reaction rates of bound cognates differ from those of bound near-cognates. If a near-cognate aa-tRNA binds to the A site of the ribosome, it may be rejected at the anti-codon recognition step or proofreading step, or it may be accepted. Regardless of its fate, the near-cognates and non-cognates have caused delays of varying duration to the observed rate of translation. Rate constants have been measured at a temperature of 20 °C by (Gromadski, K.B., Rodnina, M.V., 2004. Kinetic determinants of high-fidelity tRNA discrimination on the ribosome. Mol. Cell 13, 191-200). These rate constants have been re-evaluated at 37 °C, using experimental data at 24.5 °C and 37 °C (Varenne, S., et al., 1984. Translation is a non-uniform process: effect of tRNA availability on the rate of elongation of nascent polypeptide chains. J. Mol. Biol. 180, 549-576). The key results of the study are: (i) the average time (at 37 °C) to add an amino acid, as defined by the ith codon, to the nascent peptide chain is: τ(i) = 9.06 + 1.445[10.48C(i) + 0.5R(i)] (in ms); (ii) the misreading frequency is directly proportional to the near-cognate competition, E(i) = 0.0009C(i); (iii) the competition from near-cognates, and not the availability of cognate aa-tRNAs, is the most important factor that determines the translation rate; the four codons with highest near-cognate competition (in the case of E.
coli) are [GCC]>[CGG]>[AGG]>[GGA], which overlap only partially with the rarest codons: [AGG]<[CCA]<[GCC]<[CAC]; (iv) based on the kinetic rates at 37 °C, the average time to insert a cognate amino acid is 9.06 ms and the average delay to process a near-cognate aa-tRNA is 10.45 ms; and (v) the model also provides estimates of the vacancy times of the A site of the ribosome, an important factor in frameshifting.", "keywords": "ribosome kinetics;translation;trna availability;mistranslation frequencies", "title": "Ribosome kinetics and aa-tRNA competition determine rate and fidelity of peptide synthesis"} {"abstract": "In this paper, we propose an efficient and spectrally accurate numerical method for computing the dynamics of rotating Bose-Einstein condensates (BEC) in two dimensions (2D) and 3D based on the Gross-Pitaevskii equation (GPE) with an angular momentum rotation term. By applying a time-splitting technique for decoupling the nonlinearity and properly using the alternating direction implicit (ADI) technique for the coupling in the angular momentum rotation term in the GPE, at every time step, the GPE in the rotational frame is decoupled into a nonlinear ordinary differential equation (ODE) and two partial differential equations with constant coefficients. This allows us to develop new time-splitting spectral methods for computing the dynamics of BEC in a rotational frame. The new numerical method is explicit, unconditionally stable, and of spectral accuracy in space and second-order accuracy in time. Moreover, it is time reversible and time transverse invariant, and conserves the position density at the discretized level if the GPE does. Extensive numerical results are presented to confirm the above properties of the new numerical method for rotating BEC in 2D and 3D. ", "keywords": "rotating bose-einstein condensates;gross-pitaevskii equation;angular momentum rotation;time splitting", "title": "An efficient and spectrally accurate numerical method for computing dynamics of rotating Bose-Einstein condensates"} {"abstract": "The following problem arises in the context of parallel computation: how many bits of information are required to specify any one element from an arbitrary (nonempty) X-subset of a set? We characterize optimal coding techniques for this problem. We calculate the asymptotic behavior of the amount of information necessary, and construct an algorithm that specifies an element from a subset in an optimal manner.", "keywords": "coding;energy;parallel computation;traces", "title": "The entropy of traces in parallel computation"} {"abstract": "Now that the human genome has been mapped, a new challenge has emerged", "keywords": "proteomics;cancer;lymph;lymphatic endothelium;lymphedema;laser capture microdissection;protein microarrays;seldi-tof-mass spectroscopy", "title": "Proteomic Technologies to Study Diseases of the Lymphatic Vascular System"} {"abstract": "In this paper we detail the design and implementation of an Eclipse plug-in for an integrated, model-based approach to the engineering of web service compositions. The plug-in allows a designer to specify a service's obligations for coordinated web service compositions in the form of Message Sequence Charts (MSCs) and then generate policies in the form of WS-CDL and services in the form of BPEL4WS. The approach uses finite state machine representations of web service compositions and service choreography rules, and assigns semantics to the distributed process interactions.
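The elongation-time and misreading expressions quoted in the ribosome-kinetics abstract above lend themselves to a quick worked example; the competition quotients C(i) and R(i) below are made-up illustrative values:

```python
def elongation_time_ms(C, R):
    # tau(i) = 9.06 + 1.445 * [10.48 * C(i) + 0.5 * R(i)]  (ms)
    return 9.06 + 1.445 * (10.48 * C + 0.5 * R)

def misreading_frequency(C):
    # E(i) = 0.0009 * C(i)
    return 0.0009 * C

C, R = 0.5, 2.0          # hypothetical competition quotients
print(elongation_time_ms(C, R))   # ~ 18.08 ms for this codon
print(misreading_frequency(C))    # 4.5e-4
```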
The move towards implementing web service choreography requires design-time verification of these service interactions to ensure that service implementations fulfill requirements for multiple interested partners before such compositions and choreographies are deployed. The plug-in provides a tool for integrated specification, formal modeling, animation, and verification of choreographed web service interactions. The LTSA-Eclipse (for Web Services) plug-in is publicly available, along with other plug-ins, at: http://www.doc.ic.ac.uk/ltsa.", "keywords": "eclipse plug-in;verification;service design;web service choreography;standards;validation;implementation;model checking", "title": "leveraging eclipse for integrated model-based engineering of web service compositions"} {"abstract": "Histone modifications are important epigenetic regulators and play a critical role in development. The targeting mechanism for histone modifications is complex and still incompletely understood. Here we applied a computational approach to predict genome-scale histone modification targets in humans from the genomic DNA sequences using a set of recent ChIP-seq data. We found that a number of histone modification marks could be predicted with high accuracy. On the other hand, the impact of DNA sequences for each mark is intrinsically different, dependent upon the target- and tissue-specificity. Diverse patterns are associated with different repetitive elements. Unexpectedly, we found that non-overlapping, functionally opposite histone modification marks could share similar sequence features. We propose that these marks may target a common set of loci but are mutually exclusive and that the competition may be important for developmental control. Taken together, we show that our computational approach has provided new insights into the targeting mechanism of histone modifications.", "keywords": "dna sequence;histone modification;human", "title": "Targeted Recruitment of Histone Modifications in Humans Predicted by Genomic Sequences"} {"abstract": "This paper describes a robust method for crease detection and curvature estimation on large, noisy triangle meshes. We assume that these meshes are approximations of piecewise-smooth surfaces derived from range or medical imaging systems and thus may exhibit measurement or even registration noise.
Additionally, we show practical results for several large mesh data sets that are the motivation for this algorithm. ", "keywords": "curvature estimation;normal vector estimation;crease detection;dense triangle meshes;piecewise-smooth surfaces", "title": "Normal vector voting: Crease detection and curvature estimation on large, noisy meshes"} {"abstract": "Given the exponential growth of videos published on the Internet, mechanisms for clustering, searching, and browsing large numbers of videos have become a major research area. More importantly, there is a demand for event detectors that go beyond simply finding objects and instead detect more abstract concepts, such as \"feeding an animal\" or a \"wedding ceremony\". This article presents an approach for event classification that enables searching for arbitrary events, including more abstract concepts, in found video collections based on the analysis of the audio track. The approach does not rely on speech processing and is language-independent; instead, it generates models for a set of example query videos using a mixture of two types of audio features: Linear-Frequency Cepstral Coefficients and Modulation Spectrogram Features. This approach can be used to complement video analysis and requires no domain-specific tagging. Application of the approach to the TRECVid MED 2011 development set, which consists of more than 4000 random \"wild\" videos from the Internet, has shown a detection accuracy of 64% including those videos which do not contain an audio track.", "keywords": "multimedia event detection;trecvid;audio processing", "title": "acoustic super models for large scale video event detection"} {"abstract": "Material addition using focused ion beam induced deposition (FIBID) is a well-established local deposition technology in microelectronic engineering. We investigated FIBID characteristics as a function of beam overlap using phenanthrene molecules. To initiate the localization of gas molecules, we irradiated the ion beams using a raster scan. We varied the beam overlap between -900% and 50% by adjusting the pixel size from 300 nm to 15 nm. We discuss the changes in surface morphologies and deposition rates due to delocalization by the range effect of excited surface atoms, the divided structure caused by the continuous effect of the raster scan, enhanced localization by the discrete effect of replenished gas molecules, the competition between deposition and sputtering processes, and the change in processing time with scan speed (smaller overlap case).", "keywords": "fibid;raster scan;phenanthrene gas;carbon;beam overlap", "title": "Morphological influence of the beam overlap in focused ion beam induced deposition using raster scan"} {"abstract": "Nowadays it is vital to design robust mechanisms to provide QoS for multimedia applications as an integral part of the network traffic. The main goal of this paper is to provide an efficient rate control scheme to support a content-aware video transmission mechanism with buffer underflow avoidance at the receiver in congested networks. Towards this, we introduce a content-aware time-varying utility function, in which the quality impact of video content is incorporated into its mathematical expression.
Moreover, we analytically model the buffer requirements of video sources in two ways: first as constraints of the optimization problem to guarantee a minimum rate demand for each source, and second as a penalty function embedded as part of the objective function attempting to achieve the highest possible rate for each source. Then, using the proposed analytical model, we formulate a dynamic network utility maximization problem, which aims to maximize the aggregate hybrid objective function of sources subject to capacity and buffer constraints. Finally, using the primal-dual method, we solve the DNUM problem and propose a distributed algorithm called CA-DNUM that optimally allocates the shared bandwidth to video streams. The experimental results demonstrate the efficacy and performance improvement of the proposed content-aware rate allocation algorithm for video sources in different scenarios.", "keywords": "video streaming;dynamic network utility maximization;content-aware video utility model;buffer underflow avoidance;convex optimization", "title": "Content-aware rate allocation for efficient video streaming via dynamic network utility maximization"} {"abstract": "Articulatory parameters, vocal tract shape and cross-sectional area function were determined from fricative spectra. A model of fricative generation was used for providing acoustical constraints for an optimization procedure with muscle work as the criterion of optimality. A distance between spectra was measured with the use of the Cauchy-Bounjakovsky inequality. A proper initial approximation of articulatory parameters is required to obtain an accurate and stable solution of the inverse problem.", "keywords": "speech;inverse problem;vocal tract shape;fricatives;optimization", "title": "INVERSE PROBLEM FOR FRICATIVES"} {"abstract": "The tremendous growth in the volume of web usage data has boosted web mining research focused on discovering potentially useful knowledge from web usage data. This paper presents a new web usage mining process for finding sequential patterns in web usage data which can be used for predicting the possible next move in browsing sessions for web personalization. This process consists of three main stages: preprocessing web access sequences from the web server log, mining preprocessed web log access sequences by a tree-based algorithm, and predicting web access sequences by using a dynamic clustering-based model. It is designed based on the integration of the dynamic clustering-based Markov model with the Pre-Order Linked WAP-Tree Mining (PLWAP) algorithm to enhance mining performance. The proposed mining process is verified by experiments with promising results.", "keywords": "sequential patterns;web usage mining;pre-order linked wap-tree;markov model;web access patterns", "title": "efficient web usage mining process for sequential patterns"} {"abstract": "In this paper we describe the prototype of an archive of short movies. The project proposes two original solutions for implementing the interface of this archive: an organic metaphor and a hypervisual navigation mechanism.", "keywords": "user interfaces;metaphors;hyperlinks;hypervideo", "title": "experimenting with an organic metaphor and hypervisual links for the interface of a video collection"} {"abstract": "Drug discovery is the process of designing compounds that have desirable properties, such as activity and nontoxicity.
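For the web usage mining abstract above, the prediction stage can be illustrated with a plain first-order Markov model over page transitions (the PLWAP tree mining and dynamic clustering are omitted; the sessions are invented):

```python
from collections import Counter, defaultdict

def train_markov(sessions):
    """Count page-to-page transitions over browsing sessions."""
    trans = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            trans[cur][nxt] += 1
    return trans

def predict_next(trans, page):
    """Most frequent successor of the current page, if any."""
    return trans[page].most_common(1)[0][0] if trans[page] else None

sessions = [["home", "catalog", "item", "cart"],
            ["home", "catalog", "item", "item2"],
            ["home", "search", "item"]]
model = train_markov(sessions)
print(predict_next(model, "catalog"))   # -> "item"
```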
Molecule classification techniques are used along with this process to predict the properties of the compounds to expedite their testing. Ideally, the classification rules found should be accurate and reveal novel chemical properties, but current molecule representation techniques lead to less-than-adequate accuracy and knowledge discovery. This work extends the propositionalization approach recently proposed for multirelational data mining in two ways: it generates expressive attributes exhaustively, and it uses randomization to sample a limited set of complex (\"deep\") attributes. Our experimental tests show that the procedure is able to generate meaningful and interpretable attributes from molecular structural data, and that these features are effective for classification purposes.", "keywords": "relational learning;propositionalization;molecule classification;drug discovery", "title": "A Randomized Exhaustive Propositionalization Approach for Molecule Classification"} {"abstract": "The potential gains of cooperative communication and multi-hopping in underwater acoustic communication channels are examined. In particular, the performance of such systems is compared to a comparable single-hop system (direct transmission) with a common transmission distance. The effects of error propagation with decode and forward at each relay are explicitly treated, and it is shown that strong gains can be achieved by multi-hopping (an effective SNR gain) as well as by cooperation, which contributes to a diversity gain. We observe that cooperative diversity gains are retained even when considering error propagation. The analysis is done via a Markov chain analysis for both regular linear and grid networks. Our initial analysis is for single-path channels; the effects of inter-symbol interference as well as multi-user interference are examined. It is found that due to the strong decay of signal power as a function of transmission distance, multi-user interference is not as significant as inter-symbol interference. In both cases, cooperative and multi-hopping gains are observed. ", "keywords": "underwater acoustic communications;cooperative communications;multi-hopped networks;error propagation analysis;fading multipath channels;diversity;sensor networks", "title": "Error propagation analysis for underwater cooperative multi-hop communications"} {"abstract": "The task of reasoning with fuzzy description logics with fuzzy quantification is approached by means of an evolutionary algorithm. An essential ingredient of the proposed method is a heuristic, implemented as an intelligent mutation operator, which observes the evolutionary process and uses the information gathered to guess at the mutations most likely to bring about an improvement of the solutions. The viability of the method is demonstrated by applying it to reasoning on a resource scheduling problem.", "keywords": "fuzzy quantification;fuzzy logic;evolutionary algorithms;description logics", "title": "evolutionary algorithms for reasoning in fuzzy description logics with fuzzy quantifiers"} {"abstract": "The use of deadline-based scheduling in support of real-time delivery of application data units (ADUs) in a packet-switched network is investigated. Of interest is priority scheduling where a packet with a smaller ratio of T/H (time until delivery deadline over number of hops remaining) is given a higher priority. We refer to this scheduling algorithm as the T/H algorithm.
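A straightforward way to realize the T/H priority rule just described is a binary heap keyed by the ratio, which yields the O(log N) enqueue/dequeue costs discussed next; the packet fields here are assumptions:

```python
import heapq

class THScheduler:
    """Priority queue keyed by T/H: remaining time until the delivery
    deadline divided by the number of hops still to travel; the
    packet with the smallest ratio departs first."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, packet, now):
        t = packet["deadline"] - now        # time until deadline
        h = max(packet["hops_left"], 1)     # hops remaining
        self._seq += 1                      # tie-breaker for heapq
        heapq.heappush(self._heap, (t / h, self._seq, packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None
```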
T/H has time complexity O(log N) for a backlog of N packets and was shown to achieve good performance in terms of the percentage of ADUs that are delivered on-time. We develop a new and efficient algorithm, called T/H_p, that has O(1) time complexity. The performance differences of T/H, T/H_p and FCFS are evaluated by simulation. Implementations of T/H and T/H_p in high-speed routers are also discussed. We show through simulation that T/H_p is superior to FCFS but not as good as T/H. In view of the constant time complexity, T/H_p is a good candidate for high-speed routers when both performance and implementation cost are taken into consideration.", "keywords": "real-time data delivery;deadline-based scheduling;packet-switched networks;performance evaluation", "title": "Deadline-based scheduling in support of real-time data delivery"} {"abstract": "We show that, for an arbitrary function h(n) and each recursive function l(n) that are separated by a nondeterministically fully space constructible g(n), such that h(n) ∈ Ω(g(n)) but l(n) ∉ Ω(g(n)), there exists a unary language L in NSPACE(h(n)) that is not contained in NSPACE(l(n)). The same holds for the deterministic case. The main contribution to the well-known Space Hierarchy Theorem is that (i) the language L separating the two space classes is unary (tally), (ii) the hierarchy is independent of whether h(n) or l(n) are in Ω(log n) or in o(log n), (iii) the functions h(n) or l(n) themselves need not be space constructible nor monotone increasing, (iv) the hierarchy is established both for strong and weak space complexity classes. This allows us to present unary languages in such complexity classes as, for example, NSPACE(log log n · log* n) \ NSPACE(log log n), using a plain diagonalization.", "keywords": "computational complexity;space complexity", "title": "Space hierarchy theorem revised"} {"abstract": "Like many other researchers, this paper adopts Pareto-domination to handle not only the trade-off between objectives and constraints but also the trade-off between convergence and diversity when solving a constrained optimisation problem (COP). But there are some differences. This paper first converts a COP into an equivalent dynamic constrained multi-objective optimisation problem (DCMOP), then a dynamic version of the non-dominated sorting genetic algorithm with decomposition (NSGA/D) is designed to solve the equivalent DCMOP, and consequently the COP. A key issue for the NSGA/D to work effectively is that the environmental change should not destroy the feasibility of the population. With a feasible population, the NSGA/D can solve the DCMOP well, just as an MOEA can usually solve an unconstrained MOP well. Experimental results show that the NSGA/D outperforms or performs similarly to other state-of-the-art algorithms referred to in this paper, especially in global search.", "keywords": "evolutionary algorithm;constrained optimisation;multi-objective optimisation;dynamic multi-objective optimisation;dynamic optimisation", "title": "Non-dominated sorting genetic algorithm with decomposition to solve constrained optimisation problems"} {"abstract": "Evolutionary algorithms tend to produce solutions that are not evolvable: although current fitness may be high, further search is impeded as the effects of mutation and crossover become increasingly detrimental. In nature, in addition to having high fitness, organisms have evolvable genomes: phenotypic variation resulting from random mutation is structured and robust.
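The T/H rule in the deadline-based scheduling abstract above is simple to prototype. The sketch below is a minimal illustration, assuming invented packet fields (deadline, hops_left): a binary heap keyed on the T/H ratio serves the smallest ratio first, matching the O(log N) per-operation cost the abstract mentions. It is not the authors' constant-time T/H_p variant.

```python
# Minimal T/H priority queue: transmit the backlogged packet with the
# smallest (time until deadline) / (hops remaining) ratio first.
import heapq

class THQueue:
    def __init__(self):
        self._heap = []  # entries: (T/H priority, insertion order, packet)
        self._n = 0

    def push(self, packet, now):
        t = packet["deadline"] - now       # time until delivery deadline
        h = max(packet["hops_left"], 1)    # hops remaining
        heapq.heappush(self._heap, (t / h, self._n, packet))
        self._n += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]  # smallest T/H served first

q = THQueue()
q.push({"id": "a", "deadline": 40.0, "hops_left": 2}, now=0.0)  # T/H = 20
q.push({"id": "b", "deadline": 30.0, "hops_left": 3}, now=0.0)  # T/H = 10
print(q.pop()["id"])  # -> 'b', the more urgent packet per hop
```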
Evolvability is important because it allows the population to produce meaningful variation, leading to efficient search. However, because evolvability does not improve immediate fitness, it must be selected for indirectly. One way to establish such a selection pressure is to change the fitness function systematically. Under such conditions, evolvability emerges only if the representation allows manipulating how genotypic variation maps onto phenotypic variation and if such manipulations lead to detectable changes in fitness. This research forms a framework for understanding how fitness function and representation interact to produce evolvability. Ultimately, evolvable encodings may lead to evolutionary algorithms that exhibit the structured complexity and robustness found in nature.", "keywords": "modularity;estimation-of-distribution;representations;genetic algorithms;indirect encodings;evolvability;development", "title": "selecting for evolvable representations"} {"abstract": "Increasingly, models (and modelers) are being asked to address the interactions between human influences, ecological processes, and landscape dynamics that impact many diverse aspects of managing complex coupled human and natural systems. These systems may be profoundly influenced by human decisions at multiple spatial and temporal scales, and the limitations of traditional process-level ecosystem modeling approaches for representing the richness of factors shaping landscape dynamics in these coupled systems have resulted in the need for new analysis approaches. New tools in the areas of spatial data management and analysis, multicriteria decision-making, individual-based modeling, and complexity science have all begun to impact how we approach modeling these systems. The term biocomplexity has emerged as a descriptor of the rich patterns of interactions and behaviors in human and natural systems, and the challenges of analyzing biocomplex behavior are resulting in a convergence of approaches leading to new ways of understanding these systems. Important questions related to system vulnerability and resilience, adaptation, feedback processing, cycling, non-linearities and other complex behaviors are being addressed using models employing new representational approaches to analysis. The complexity inherent in these systems challenges the modeling community to provide tools that sufficiently capture the richness of human and ecosystem processes and interactions in ways that are computationally tractable and understandable. We examine one such tool, EvoLand, which uses an actor-based approach to conduct alternative futures analyses in the Willamette Basin, Oregon.", "keywords": "complexity;resilience;adaptation;simulation", "title": "Modeling biocomplexity - actors, landscapes and alternative futures"} {"abstract": "The need for automating behavioural observations and the evolution of systems developed for that purpose are outlined. Automatic video tracking systems enable behaviour to be studied in a reliable and consistent way, and over longer time periods than if it is manually recorded. To overcome limitations of currently available systems and to meet researchers' needs as these have been identified, we have developed an integrated system (EthoVision) for automatic recording of activity, movement and interactions of insects.
The system is described here, with special emphasis on file management, experiment design, arena and zone definition, object detection, experiment control, visualisation of tracks and calculation of analysis parameters. A review of studies using our system is presented, to demonstrate its use in a variety of entomological applications. This includes research on beetles, fruit flies, soil insects, parasitic wasps, predatory mites, ticks, and spiders. Finally, possible future directions for development are discussed.", "keywords": "video tracking;movement analysis;behaviour recognition;ethovision", "title": "Computerised video tracking, movement analysis and behaviour recognition in insects"} {"abstract": "The intuitionistic fuzzy set, as a generalization of the Zadeh fuzzy set, can express and process uncertainty much better, by introducing a hesitation degree. Similarity measures between intuitionistic fuzzy sets (IFSs) are used to indicate the similarity degree between the information carried by IFSs. Although several similarity measures for intuitionistic fuzzy sets have been proposed in previous studies, some of them fail to satisfy the axioms of similarity or produce counter-intuitive cases. In this paper, we first review several widely used similarity measures and then propose new similarity measures. Interpreted as the consistency of two IFSs, the proposed similarity measure is defined by operating directly on the membership function, non-membership function, hesitation function and the upper bound of the membership function of the two IFSs, rather than being based on a distance measure or on the relationship between membership and non-membership functions. It is proved that the proposed similarity measures satisfy the properties of the axiomatic definition for similarity measures. Comparison between the previous similarity measures and the proposed similarity measure indicates that the proposed similarity measure does not produce any counter-intuitive cases. Moreover, it is demonstrated that the proposed similarity measure is capable of discriminating the difference between patterns.", "keywords": "intuitionistic fuzzy set;distance measure;similarity measure;pattern recognition", "title": "A novel similarity measure on intuitionistic fuzzy sets with its applications"} {"abstract": "This paper describes an object-oriented simulation approach for the design of a flexible manufacturing system that allows the implementation of control logic during the system design phase. The object-oriented design approach is built around the formal theory of supervisory control based on Finite Automata. The formalism is used to capture inter-object relationships that are difficult to identify in the object-oriented design approach. The system resources are modeled as object classes based on the events that have to be monitored for real-time control. Real-time control issues including deadlock resolution, resource failures in various modes of operation and recovery from failures while sustaining desirable logical system properties are integrated into the logical design for simulating the supervisory controller.", "keywords": "object-oriented simulation;flexible manufacturing systems;real-time control", "title": "An object-oriented simulation framework for real-time control of automated flexible manufacturing systems"} {"abstract": "Standard workflow management systems are usually designed as client-server systems.
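For the intuitionistic fuzzy set abstract above, the sketch below computes one classical distance-based similarity between IFSs (1 minus the normalized Hamming distance over membership, non-membership and hesitation degrees). It illustrates the kind of measure the paper reviews, not the new consistency-based measure it proposes.

```python
# Classical distance-based IFS similarity. Each element of an IFS is a
# pair (membership mu, non-membership nu); hesitation is pi = 1 - mu - nu.
def ifs_similarity(A, B):
    """1 minus the normalized Hamming distance over mu, nu and pi."""
    n = len(A)
    total = 0.0
    for (mu_a, nu_a), (mu_b, nu_b) in zip(A, B):
        pi_a, pi_b = 1 - mu_a - nu_a, 1 - mu_b - nu_b
        total += abs(mu_a - mu_b) + abs(nu_a - nu_b) + abs(pi_a - pi_b)
    return 1.0 - total / (2.0 * n)

A = [(0.7, 0.2), (0.4, 0.5)]
B = [(0.6, 0.3), (0.5, 0.4)]
print(ifs_similarity(A, B))  # -> 0.9
```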
The central server is responsible for the coordination of the workflow execution and, in some cases, may manage the activities database. This centralized control architecture may represent a single point of failure, which compromises the availability of the system. We propose a fully distributed and configurable architecture for workflow management systems. It is based on the idea that the activities of a case (an instance of the process) migrate from host to host, executing the workflow tasks, following a process plan. This core architecture is improved with the addition of other distributed components so that other requirements for Workflow Management Systems, besides scalability, are also addressed. The components of the architecture were tested in different distributed and centralized configurations. The ability to configure the location of components and the use of dynamic allocation of tasks were effective for the implementation of load balancing policies.", "keywords": "large-scale workflow management systems;fully distributed workflow architectures;corba workflow implementation;mobile agents", "title": "A fully distributed architecture for large scale workflow enactment"} {"abstract": "Small strain consolidation theories treat soil properties as being constant and uniform in the course of consolidation, which is not true in the case of electro-osmosis-induced consolidation practices. Electro-osmotic consolidation leads to large strain, which affects, both physically and electro-chemically, the nonlinear changes of the soil properties to a non-negligible extent. For the nonlinear changes, iterative computations provide a mathematical approximation of the soil consolidation when the time steps and spatial geometry are intensively meshed. In this context, this paper presents a finite-difference model, EC1, for one-dimensional electro-osmotic consolidation; this model is developed based on a fixed Eulerian co-ordinate system and uses a piecewise linear approximation. The model is able to account for the large-strain-induced nonlinear changes of the physical and electro-chemical properties in a compressible mass subjected to electro-osmotic consolidation and to predict the consolidation characteristics of the compressible mass. EC1 is verified against exact analytical solutions and test results obtained from an experimental program. Example problems are illustrated with respect to the numerical solutions of large-strain electro-osmotic consolidation.", "keywords": "electro-osmosis;consolidation;large strain;nonlinear;electrical potential;pore pressure", "title": "Finite-difference model for one-dimensional electro-osmotic consolidation"} {"abstract": "This paper explores the performance degradation of silicon CMOS on-chip spiral inductors under high RF power. A novel methodology to calibrate and characterize on-chip spiral inductors with large signal inputs (high/medium power) is presented. Experiments showed 12% degradation of the quality factor in a particular inductor design when 34 dBm RF power was applied. The degradation of the inductor's quality factor can be attributed to a local self-heating effect.
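As background for the electro-osmotic consolidation abstract above, the sketch below time-steps the classical small-strain 1D consolidation equation du/dt = c_v d2u/dz2 with an explicit finite-difference scheme. It only illustrates the finite-difference machinery on which models like EC1 build; the paper's large-strain, electro-chemically coupled formulation is not reproduced, and all parameter values are invented.

```python
# Explicit finite-difference step for classical small-strain 1D
# consolidation (u = excess pore pressure, c_v = consolidation coefficient).
import numpy as np

def consolidation_step(u, cv, dz, dt):
    """One explicit time step; drained top/bottom boundaries (u = 0)."""
    r = cv * dt / dz**2
    assert r <= 0.5, "explicit scheme stability requires c_v*dt/dz^2 <= 0.5"
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    un[0] = un[-1] = 0.0
    return un

z = np.linspace(0.0, 1.0, 21)          # 1 m layer, dz = 0.05 m
u = np.full_like(z, 100.0)             # 100 kPa initial excess pressure
u[0] = u[-1] = 0.0
for _ in range(200):                   # march in time
    u = consolidation_step(u, cv=1e-3, dz=0.05, dt=1.0)
print(round(float(u.max()), 2))        # mid-depth pressure has dissipated
```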
Thermal imaging of such an inductor under high RF power validates the hypothesis.", "keywords": "high rf power;on-chip inductor;quality factor", "title": "Nonlinear characteristics of on-chip spiral inductors under high RF power"} {"abstract": "A multiplicative secret sharing scheme allows players to multiply two secret-shared field elements by locally converting their shares of the two secrets into an additive sharing of their product. Multiplicative secret sharing serves as a central building block in protocols for secure multiparty computation (MPC). Motivated by open problems in the area of MPC, we introduce the more general notion of d-multiplicative secret sharing, allowing players to locally multiply d shared secrets, and study the type of access structures for which such secret sharing schemes exist. While it is easy to show that d-multiplicative schemes exist if no d unauthorized sets of players cover the whole set of players, the converse direction is less obvious for d ≥ 3. Our main result is a proof of this converse direction, namely that d-multiplicative schemes do not exist if the set of players is covered by d unauthorized sets. In particular, t-private d-multiplicative secret sharing among k players is possible if and only if k > dt. Our negative result holds for arbitrary (possibly inefficient or even nonlinear) secret sharing schemes and implies a limitation on the usefulness of secret sharing in the context of MPC. Its proof relies on a quantitative argument inspired by communication complexity lower bounds.", "keywords": "secret sharing;secure multiparty computation;secure multiplication", "title": "On d-Multiplicative Secret Sharing"} {"abstract": "An open question in Exact Geometric Computation is whether there are transcendental computations that can be made \"geometrically exact\". Perhaps the simplest such problem in computational geometry is that of computing the shortest obstacle-avoiding path between two points p, q in the plane, where the obstacles are a collection of n discs. This problem can be solved in O(n^2 log n) time in the Real RAM model, but nothing was known about its computability in the standard (Turing) model of computation. We first show the Turing-computability of this problem, provided the radii of the discs are rationally related. We make the usual assumption that the numerical input data are real algebraic numbers. By appealing to effective bounds from transcendental number theory, we further show a single-exponential time upper bound when the input numbers are rational. Our result appears to be the first example of a non-algebraic combinatorial problem which is shown computable. It is also a rare example of transcendental number theory yielding positive computational results.", "keywords": "guaranteed precision computation;exponential complexity;real ram model;exact geometric computation;robust numerical algorithms;disc obstacles;shortest path", "title": "shortest path amidst disc obstacles is computable"} {"abstract": "Lists, multisets, and sets are well-known data structures whose usefulness is widely recognized in various areas of computer science. They have been analyzed from an axiomatic point of view with a parametric approach in Dovier et al. [1998], where the relevant unification algorithms have been developed.
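The possibility condition k > dt in the d-multiplicative secret sharing abstract above has a concrete positive-direction witness using Shamir's scheme: with k > dt players, each player can locally multiply its d shares and rescale by a Lagrange coefficient to obtain an additive share of the product. A minimal sketch, with an illustrative prime field and parameters:

```python
# Shamir-based d-multiplicative sharing: the product polynomial has degree
# d*t, so k >= d*t + 1 evaluation points determine it, and Lagrange weights
# turn local products into an additive sharing of the product of secrets.
import math
import random

P = 2_147_483_647  # illustrative prime modulus; arithmetic is in GF(P)

def shamir_share(secret, t, xs):
    """Degree-t polynomial with f(0) = secret, evaluated at the points xs."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            for x in xs]

def lagrange_at_zero(xs, i):
    """Weight lam_i with sum_i lam_i * f(x_i) = f(0) for deg(f) < len(xs)."""
    num = den = 1
    for m, xm in enumerate(xs):
        if m != i:
            num = num * xm % P
            den = den * (xm - xs[i]) % P
    return num * pow(den, P - 2, P) % P

d, t, k = 3, 1, 4                  # k > d*t, the possibility condition
xs = list(range(1, k + 1))         # evaluation point of each player
secrets = [5, 7, 11]
shares = [shamir_share(s, t, xs) for s in secrets]  # shares[j][i]: player i

# Each player i locally multiplies its d shares and scales by lam_i.
additive = [lagrange_at_zero(xs, i) *
            math.prod(shares[j][i] for j in range(d)) % P
            for i in range(k)]
print(sum(additive) % P)           # -> 385 = 5 * 7 * 11
```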
In this article, we extend these results by considering more general constraints, namely, equality and membership constraints and their negative counterparts.", "keywords": "theory;algorithms;membership and equality constraints;lists;multisets;compact lists;sets", "title": "A uniform approach to constraint-solving for lists, multisets, compact lists, and sets"} {"abstract": "The competencies (a set of specific knowledge, skills, attitudes and behaviors; e.g. stress handling, commitment, collaboration and identification of conflicts) of the employees of software organizations are a fundamental element for the success of a Software Process Improvement (SPI) initiative. We performed three case studies to identify the competencies required for the stakeholders in an SPI initiative. To identify these competencies, we observed the activities that each stakeholder performs and the interactions among them. We also identified the competencies that are required to perform those activities. We performed a classification of the identified competencies and integrated them into a framework. This framework defines the competencies for seven roles involved in an SPI initiative and defines the level of expertise required by each role for each competency. To evaluate the framework, we performed ten interviews and two empirical tests. Preliminary results show that this framework is relevant in SPI initiatives, the use of this framework can raise the awareness about the competencies, and it can support some SPI activities.", "keywords": "software process improvement;skills;stakeholders;knowledge;competency framework;competencies;spi;behavior;roles", "title": "a competency framework for the stakeholders of a software process improvement initiative"} {"abstract": "In this paper, we investigate generalized remote information concentration as the reverse process of ancilla-free phase-covariant telecloning (AFPCT), which is different from the reverse process of optimal universal telecloning. It is shown that the quantum information via the (1 → 2) AFPCT procedure can be remotely concentrated back to a single qubit with a certain probability by utilizing (non-)maximally entangled W states as quantum channels. Our protocols are a generalization of Wang's scheme (Open J Microphys 3:18-21, doi:10.4236/ojm.2013.31004, 2013). Von Neumann measurement and positive operator-valued measurement (POVM) are performed in the maximal and non-maximal cases, respectively. Relative to the former, the dimension of the measurement space in the latter is greatly reduced, which makes the physical realization easier.", "keywords": "ancilla-free phase-covariant telecloning;remote information concentration;three-qubit asymmetric entangled state;nonmaximally entangled state;projective measurement;povm", "title": "Remote information concentration via W state: reverse of ancilla-free phase-covariant telecloning"} {"abstract": "A shell finite element with transverse stress is presented in this paper in order to simulate the forming of thermoplastic composites reinforced with continuous fibres. It is shown by experimental work that many porosities occur through the thickness of the composite during the heating and the forming process. Consequently, reconsolidation, i.e. removing the porosity by applying a compressive stress through the thickness, is a main point of the process. The presented shell finite element keeps the five degrees of freedom of the standard shell elements and adds a sixth one, which is the variation in thickness.
A locking phenomenon is avoided by uncoupling bending and pinching in the material law. A set of classical validation tests demonstrates the efficiency of this approach. Finally, a forming process is simulated. It shows that the computed transverse stresses are in good agreement with the porosity removal observed in the experiments.", "keywords": "composites;forming;porosities;shell finite element;transverse stress;locking", "title": "Simulation of continuous fibre reinforced thermoplastic forming using a shell finite element with transverse stress"} {"abstract": "At the European Laboratory for High Energy Physics, CERN[1], the Large Hadron Collider (LHC)[2] accelerator is colliding beams of protons at energies of 3.5 TeV, recreating conditions close to those at the origin of the Universe. The four main LHC experiments, Alice, Atlas, CMS and LHCb, are complex detectors with millions of output channels. These experiment detectors, \"large as cathedrals\", have been designed, built and are now operated by collaborations of physicists from universities and research institutes spread across the world. Wikis are a perfect match to the collaborative nature of CERN experiments, and since TWiki[3] was installed at CERN in 2003 it has grown in popularity; the statistics from April 2011 show nearly 10000 registered editors and about 110000 topics (Figure 1). Since the start-up of the LHC, more and more users are accessing TWiki, requiring better server performance as well as finer control for read and write access and more features. This paper discusses the evolution of the use of TWiki at CERN.", "keywords": "twiki;cern;lhc", "title": "twiki: a collaboration tool for the lhc"} {"abstract": "We give a brief overview of a logic-based symbolic modeling language, PRISM, which provides a unified approach to generative probabilistic models including Bayesian networks, hidden Markov models and probabilistic context free grammars. We include some experimental results with a probabilistic context free grammar extracted from the Penn Treebank. We also show EM learning of a probabilistic context free graph grammar as an example of exploring a new area.", "keywords": "symbolic-statistical modeling;prism;probabilistic context free grammar", "title": "A glimpse of symbolic-statistical modeling by PRISM"} {"abstract": "Management Information Systems researchers rely on longitudinal case studies to investigate a variety of phenomena such as systems development, system implementation, and information systems-related organizational change. However, insufficient attention has been paid to understanding the unique validity and reliability issues related to the timeline that is either explicitly or implicitly required in a longitudinal case study. In this paper, we address three forms of longitudinal timeline validity: time unit validity (which deals with the question of how to segment the timeline - weeks, months, years, etc.), time boundaries validity (which deals with the question of how long the timeline should be), and time period validity (which deals with the issue of which periods should be in the timeline). We also examine timeline reliability, which deals with the question of whether another judge would have assigned the same events to the same sequence, categories, and periods.
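As a toy counterpart to the probabilistic context free grammars mentioned in the PRISM abstract above, the sketch below computes the inside (CKY) probability of a string under a tiny hand-written PCFG in Chomsky normal form. The grammar and sentence are invented and unrelated to the Penn Treebank experiments the abstract reports.

```python
# Inside probability of a string under a toy CNF PCFG, via the CKY
# dynamic program over spans.
from collections import defaultdict

binary = {("S", ("NP", "VP")): 1.0,   # rule probabilities P(A -> B C)
          ("VP", ("V", "NP")): 1.0}
lexical = {("NP", "time"): 0.6, ("NP", "home"): 0.4,
           ("V", "flies"): 1.0}

def inside_prob(words, start="S"):
    """P(start =>* words) under the toy PCFG."""
    n = len(words)
    chart = defaultdict(float)              # chart[(i, j, A)] = inside prob
    for i, w in enumerate(words):
        for (A, word), p in lexical.items():
            if word == w:
                chart[(i, i, A)] += p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for m in range(i, j):           # split between m and m + 1
                for (A, (B, C)), p in binary.items():
                    chart[(i, j, A)] += (p * chart[(i, m, B)]
                                           * chart[(m + 1, j, C)])
    return chart[(0, n - 1, start)]

print(inside_prob(["time", "flies", "home"]))  # 1.0 * 0.6 * (1.0 * 0.4) ~ 0.24
```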
Techniques to address these forms of longitudinal timeline validity include: matching the unit of time to the pace of change to address time unit validity, use of member checks and a formal case study protocol to address time boundaries validity, analysis of archival data to address both time unit and time boundary validity, and the use of triangulation to address timeline reliability. These techniques should be used to design, conduct, and report longitudinal case studies that contain valid and reliable conclusions.", "keywords": "qualitative methods;longitudinal case study;timeline validity;timeline reliability", "title": "Improving validity and reliability in longitudinal case study timelines"} {"abstract": "Information privacy has been called one of the most important ethical issues of the information age. Public opinion polls show rising levels of concern about privacy among Americans. Against this backdrop, research into issues associated with information privacy is increasing. Based on a number of preliminary studies, it has become apparent that organizational practices, individuals' perceptions of these practices, and societal responses are inextricably linked in many ways. Theories regarding these relationships are slowly emerging. Unfortunately, researchers attempting to examine such relationships through confirmatory empirical approaches may be impeded by the lack of validated instruments for measuring individuals' concerns about organizational information privacy practices. To enable future studies in the information privacy research stream, we developed and validated an instrument that identifies and measures the primary dimensions of individuals' concerns about organizational information privacy practices. The development process included examinations of privacy literature; experience surveys and focus groups; and the use of expert judges. The result was a parsimonious 15-item instrument with four subscales tapping into dimensions of individuals' concerns about organizational information privacy practices. The instrument was rigorously tested and validated across several heterogeneous populations, providing a high degree of confidence in the scales' validity, reliability, and generalizability.", "keywords": "privacy;lisrel;ethical issues;measures;reliability;validity", "title": "Information privacy: Measuring individuals' concerns about organizational practices"} {"abstract": "We show how to learn to dynamically adapt the number of cards in real time in token-based pull systems. We propose a Simulation-based Genetic Programming approach which does not need training sets. We illustrate how the approach can be implemented using Arena and µGP. A reactive ConWIP example shows the efficiency of the approach and of the knowledge extracted. The resulting decision tree can be used online by production managers or for self-adaptation purposes.", "keywords": "kanban;conwip;manufacturing systems;reactive pull systems;self-adaptive systems;learning;simulation;genetic programming", "title": "Using genetic programming and simulation to learn how to dynamically adapt the number of cards in reactive pull systems"} {"abstract": "We describe the creation of a development framework for a platform-based design approach, in the context of the SegBus platform. The work intends to provide automated procedures for platform build-up and application mapping. The solution is based on a model-based process and heavily employs UML. We develop a Domain Specific Language to support the platform modeling.
An emulator is consequently introduced to allow as accurate a performance estimation of the solution as possible at high abstraction levels. Automated execution schedule generation is also featured. The resulting framework is applied to build actual design solutions for an MP3-decoder application.", "keywords": "model-based engineering;domain-specific languages;system emulation;code generation;system-on-chip", "title": "A development and verification framework for the SegBus platform"} {"abstract": "This paper introduces a neuro-fuzzy controller (NFC) for the speed control of a PMSM. A four-layer neural network (NN) is used to adjust the input and output parameters of the membership functions in a fuzzy logic controller (FLC). The back-propagation learning algorithm is used for training this network. The performance of the proposed controller is verified by both simulations and experiments. The hardware implementation of the controllers is made using a TMS320F240 DSP. The results are compared with the results obtained from a Proportional+Integral (PI) controller. Simulation and experimental results indicate that the proposed NFC is reliable and effective for the speed control of the PMSM over a wide range of operating conditions of the PMSM drive.", "keywords": "fuzzy logic control;neural networks;permanent magnet synchronous motor drive", "title": "A neuro-fuzzy controller for speed control of a permanent magnet synchronous motor drive"} {"abstract": "The suffix tree of a string is the fundamental data structure of combinatorial pattern matching. We present a recursive technique for building suffix trees that yields optimal algorithms in different computational models. Sorting is an inherent bottleneck in building suffix trees and our algorithms match the sorting lower bound. Specifically, we present the following results. (1) Weiner [1973], who introduced the data structure, gave an optimal O(n)-time algorithm for building the suffix tree of an n-character string drawn from a constant-size alphabet. In the comparison model, there is a trivial Ω(n log n)-time lower bound based on sorting, and Weiner's algorithm matches this bound. For integer alphabets, the fastest known algorithm is the O(n log n) time comparison-based algorithm, but no super-linear lower bound is known. Closing this gap is the main open question in stringology. We settle this open problem by giving a linear time reduction to sorting for building suffix trees. Since sorting is a lower bound for building suffix trees, this algorithm is time-optimal in every alphabet model; in particular, for an alphabet consisting of integers in a polynomial range we get the first known linear-time algorithm. (2) All previously known algorithms for building suffix trees exhibit a marked absence of locality of reference, and thus they tend to elicit many page faults (I/Os) when indexing very long strings. They are therefore unsuitable for building suffix trees in secondary storage devices, where I/Os dominate the overall computational cost. We give a linear-I/O reduction to sorting for suffix tree construction.
Since sorting is a trivial I/O lower bound for building suffix trees, our algorithm is I/O-optimal.", "keywords": "algorithms;design;theory;dam model;external-memory data structures;ram model;sorting complexity;suffix array;suffix tree", "title": "On the sorting-complexity of suffix tree construction"} {"abstract": "We begin by observing that (discrete-time) Quasi-Birth-Death Processes (QBDs) are equivalent, in a precise sense, to probabilistic 1-Counter Automata (p1CAs), and both Tree-Like QBDs (TL-QBDs) and Tree-Structured QBDs (TS-QBDs) are equivalent to both probabilistic Pushdown Systems (pPDSs) and Recursive Markov Chains (RMCs). We then proceed to exploit these connections to obtain a number of new algorithmic upper and lower bounds for central computational problems about these models. Our main result is this: for an arbitrary QBD, we can approximate its termination probabilities (i.e., its G matrix) to within i bits of precision (i.e., within additive error 1/2^i), in time polynomial in both the encoding size of the QBD and in i, in the unit-cost rational arithmetic RAM model of computation. Specifically, we show that a decomposed Newton's method can be used to achieve this. We emphasize that this bound is very different from the well-known linear/quadratic convergence of numerical analysis, known for QBDs and TL-QBDs, which typically gives no constructive bound in terms of the encoding size of the system being solved. In fact, we observe (based on recent results) that for the more general TL-QBDs such a polynomial upper bound on Newton's method fails badly. Our upper bound proof for QBDs combines several ingredients: a detailed analysis of the structure of 1-Counter Automata, an iterative application of a classic condition number bound for errors in linear systems, and a very recent constructive bound on the performance of Newton's method for strongly connected monotone systems of polynomial equations. We show that the quantitative termination decision problem for QBDs (namely, is G_{u,v} ≥ 1/2?) is at least as hard as long-standing open problems in the complexity of exact numerical computation, specifically the square-root sum problem. On the other hand, it follows from our earlier results for RMCs that any non-trivial approximation of termination probabilities for TL-QBDs is square-root-sum-hard.", "keywords": "quasi-birth-death processes;tree-like qbds;probabilistic 1-counter automata;pushdown systems;newton's method", "title": "Quasi-Birth-Death Processes, Tree-Like QBDs, Probabilistic 1-Counter Automata, and Pushdown Systems"} {"abstract": "The Support Vector Machine is an acknowledged powerful tool for building classifiers, but it lacks flexibility, in the sense that the kernel is chosen prior to learning. Multiple Kernel Learning enables the kernel to be learned from an ensemble of basis kernels, whose combination is optimized in the learning process. Here, we propose Composite Kernel Learning to address the situation where distinct components give rise to a group structure among kernels. Our formulation of the learning problem encompasses several setups, putting more or less emphasis on the group structure. We characterize the convexity of the learning problem, and provide a general wrapper algorithm for computing solutions.
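The sorting connection in the suffix tree abstract above can be seen in miniature with the naive construction of a suffix array, the sorted leaf order of the suffix tree. This is only an illustration of the reduction's direction (with naive O(n^2 log n)-style cost), not the paper's linear-time algorithm.

```python
# Naive sorted-suffix construction of a suffix array, plus the longest
# common prefix of adjacent suffixes, which recovers suffix tree structure.
def suffix_array(s):
    """Indices of s's suffixes in lexicographic order."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp(a, b):
    """Length of the longest common prefix of two strings."""
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

s = "banana"
sa = suffix_array(s)
print(sa)                          # [5, 3, 1, 0, 4, 2]
print([s[i:] for i in sa])         # suffixes in sorted order
print([lcp(s[sa[i]:], s[sa[i + 1]:]) for i in range(len(sa) - 1)])
```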
Finally, we illustrate the behavior of our method on multi-channel data where groups correspond to channels.", "keywords": "supervised learning;support vector machine;kernel learning;structured kernels;feature selection and sparsity", "title": "Composite kernel learning"} {"abstract": "A general goal concerning fundamental linear algebra problems is to reduce the complexity estimates to essentially the same as that of multiplying two matrices (plus possibly a cost related to the input and output sizes). Among the bottlenecks one usually finds the questions of designing a recursive approach and mastering the sizes of the intermediately computed data. In this talk we are interested in two special cases of lattice basis reduction. We consider bases given by square matrices over K[x] or Z, with, respectively, the notion of reduced form and LLL reduction. Our purpose is to introduce basic tools for understanding how to generalize the Lehmer and Knuth-Schönhage gcd algorithms for basis reduction. Over K[x] this generalization is a key ingredient for giving a basis reduction algorithm whose complexity estimate is essentially that of multiplying two polynomial matrices. An analogous relation between integer basis reduction and integer matrix multiplication is not known. The topic receives a lot of attention, and recent results on the subject show that there might be room for progressing on the question.", "keywords": "polynomial matrix;matrix reduction;lll basis reduction;euclidean lattice", "title": "recent progress in linear algebra and lattice basis reduction"} {"abstract": "Contacts between mobile users provide opportunities for data updates that supplement infrastructure-based mechanisms. While the benefits of such opportunistic sharing are intuitive, quantifying the capacity increase they give rise to is challenging because both contact rates and contact graphs depend on the structure of the social networks users belong to. Furthermore, social connectivity influences not only users' interests, i.e., the content they own, but also their willingness to share data with others. All these factors can have a significant effect on the capacity gains achievable through opportunistic contacts. This paper's main contribution is in developing a tractable model for estimating such gains in a content update system, where content originates from a server along multiple channels, with blocks of information in each channel updated at a certain rate, and users differ in their contact graphs, interests, and willingness to share content, e.g., only with the members of their own social networks. We establish that the added capacity available to improve content consistency through opportunistic sharing can be obtained by solving a convex optimization problem. The resulting optimal policy is evaluated using traces reflecting contact graphs in different social settings and compared to heuristic policies. The evaluation demonstrates the capacity gains achievable through opportunistic sharing, and the impact on those gains of the structure of the underlying social network.", "keywords": "consistency;optimization;dynamic content;dissemination;delay-tolerant networks;social networks", "title": "quantifying content consistency improvements through opportunistic contacts"} {"abstract": "The performance of a MEMS (Micro-Electro-Mechanical Systems) sensor in an RFID system has been calculated, simulated and analyzed.
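A fixed-weight version of the kernel combination that the composite kernel learning abstract above optimizes can be run with any SVM that accepts precomputed kernels. In the sketch below the weights are hand-picked rather than learned as in the paper, the basis kernels are two RBF kernels standing in for channel-specific kernels, and scikit-learn availability is assumed.

```python
# Convex combination of basis kernels fed to an SVM via a precomputed
# Gram matrix; in true (composite/multiple) kernel learning the weights
# would be optimized jointly with the classifier.
import numpy as np
from sklearn.svm import SVC

def rbf(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

weights = [0.7, 0.3]                      # hand-picked, sum to 1
gammas = [0.5, 5.0]                       # one basis kernel per bandwidth
K = sum(d * rbf(X, X, g) for d, g in zip(weights, gammas))

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                    # training accuracy
```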
It documents the viability - from the power consumption point of view - of integrating a MEMS sensor in a passive tag while maintaining its long range. The wide variety of sensors lets us specify as many applications as the imagination is able to create. The sensor tag works without a battery, and it is remotely powered through a commercial reader complying with the EPC Class 1 Gen 2 standard. The key point is the integration in the tag of a very low power consumption pressure MEMS sensor. The power consumption of the sensor is 12.5 µW. The specifically developed RFID CMOS passive module, with an integrated temperature sensor, is able to communicate up to 2.4 meters. With the pressure MEMS sensor added as an input capacitance, a maximum range of 2 meters can be achieved between the RFID sensor tag and a commercial reader (typical reported ranges for passive pressure sensors are of a few centimeters). The RFID module has been fabricated with a CMOS process compatible with a bulk micromachining MEMS process. Thus, the feasibility of a single-chip solution is demonstrated.", "keywords": "radiofrequency identification;sensor systems;low power electronics;wireless sensor networks", "title": "Study of the communication distance of a MEMS Pressure Sensor Integrated in a RFID Passive Tag"} {"abstract": "In this paper we present a novel hardware architecture for real-time image compression implementing a fast, searchless iterated function system (SIFS) fractal coding method. In the proposed method and corresponding hardware architecture, domain blocks are fixed to a spatially neighboring area of range blocks in a manner similar to that given by Furao and Hasegawa. A quadtree structure, covering from 32×32 blocks down to 2×2 blocks, and even to single pixels, is used for partitioning. Coding of 2×2 blocks and single pixels is unique among current fractal coders. The hardware architecture contains units for domain construction, zig-zag transforms, range and domain mean computation, and a parallel domain-range match capable of concurrently generating a fractal code for all quadtree levels. With this efficient, parallel hardware architecture, the fractal encoding speed is improved dramatically. Additionally, the attained compression performance remains comparable to traditional search-based and other searchless methods. Experimental results, with the proposed hardware architecture implemented on an Altera APEX20K FPGA, show that the fractal encoder can encode a 512×512×8 image in approximately 8.36 ms operating at 32.05 MHz. Therefore, this architecture is seen as a feasible solution to real-time fractal image compression.", "keywords": "fractal image encoding;quadtree;searchless;real-time image compression", "title": "A hardware architecture for real-time image compression using a searchless fractal image coding method"} {"abstract": "The view of a node in a port-labeled network is an infinite tree encoding all walks in the network originating from this node. We prove that for any integers n ≥ D ≥ 1, there exists a port-labeled network with at most n nodes and diameter at most D which contains a pair of nodes whose (infinite) views are different, but whose views truncated to depth Ω(D log(n/D)) are identical.", "keywords": "anonymous network;port-labeled network;view;quotient graph", "title": "Distinguishing views in symmetric networks: A tight lower bound"} {"abstract": "For two decades, the memory wall has affected many applications in their ability to benefit from improvements in processor speed.
Cache injection addresses this disparity for I/O by writing data into a processor's cache directly from the I/O bus. This technique reduces data latency and, unlike data prefetching, improves memory bandwidth utilization. These improvements are significant for data-intensive applications whose performance is dominated by compulsory cache misses. We present an empirical evaluation of three injection policies and their effect on the performance of two parallel applications and several collective micro-benchmarks. We demonstrate that the effectiveness of cache injection on performance is a function of the communication characteristics of applications, the injection policy, the target cache, and the severity of the memory wall. For example, we show that injecting message payloads to the L3 cache can improve the performance of network-bandwidth-limited applications. In addition, we show that cache injection improves the performance of several collective operations, but not all-to-all operations (implementation dependent). Our study shows negligible pollution to the target caches.", "keywords": "memory wall;cache injection", "title": "cache injection for parallel applications"} {"abstract": "In a key management scheme for hierarchy-based access control, each security class having a higher clearance can derive the cryptographic secret keys of its other security classes having lower clearances. In 2008, Chung et al. proposed an efficient scheme on access control in user hierarchy based on the elliptic curve cryptosystem [Information Sciences 178 (1) (2008) 230-243]. Their scheme provides an efficient solution to key management for dynamic access problems. However, in this paper, we propose an attack on Chung et al.'s scheme to show that it is insecure against the exterior root finding attack. We show that under this attack, an attacker (adversary) who is not a user in any security class in a user hierarchy can attempt to derive the secret key of a security class by using the root finding algorithm. In order to remedy this attack, we further propose a simple improvement on Chung et al.'s scheme. Overall, the main theme of this paper is very simple: a security flaw in Chung et al.'s scheme is presented and a fix is then provided to remedy it.", "keywords": "key management;elliptic curve;hierarchical access control;polynomial interpolation;security;exterior root finding attacks", "title": "Cryptanalysis and improvement of an access control in user hierarchy based on elliptic curve cryptosystem"} {"abstract": "Molecular modeling and docking studies along with three-dimensional quantitative structure-activity relationship (3D-QSAR) studies have been used to determine the correct binding mode of glycogen synthase kinase 3 beta (GSK-3 beta) inhibitors. The approaches of comparative molecular field analysis (CoMFA) and comparative molecular similarity index analysis (CoMSIA) are used for the 3D-QSAR of 51 substituted benzofuran-3-yl-(indol-3-yl)maleimides as GSK-3 beta inhibitors. Two binding modes of the inhibitors to the binding site of GSK-3 beta are investigated. Binding mode 1 yielded better 3D-QSAR correlations using both CoMFA and CoMSIA methodologies. The three-component CoMFA model from the steric and electrostatic fields for the experimentally determined pIC(50) values has the following statistics: R(2)(cv) = 0.386 and SE(cv) = 0.854 for the cross-validation, and R(2) = 0.811 and SE = 0.474 for the fitted correlation.
F(3,47) = 67.034, and the probability that R(2) = 0 is 0.000. The binding mode suggested by the results of this study is consistent with the preliminary results of X-ray crystal structures of inhibitor-bound GSK-3 beta. The 3D-QSAR models were used for the estimation of the inhibitory potency of two additional compounds.", "keywords": "benzofuran-3-yl-(indol-3-yl)maleimides;binding mode;comfa;comsia;docking;gsk-3beta inhibitors;3d-qsar;x-ray", "title": "Use of molecular modeling, docking, and 3D-QSAR studies for the determination of the binding mode of benzofuran-3-yl-(indol-3-yl)maleimides as GSK-3 beta inhibitors"} {"abstract": "In the earlier paper [6], a Galerkin method was proposed and analyzed for the numerical solution of a Dirichlet problem for a semi-linear elliptic boundary value problem of the form -ΔU = F(·, U). This was converted to a problem on a standard domain and then converted to an equivalent integral equation. Galerkin's method was used to solve the integral equation, with the eigenfunctions of the Laplacian operator on the standard domain D as the basis functions. In this paper we consider the implementation of this scheme, and we illustrate it for some standard domains D.", "keywords": "elliptic;nonlinear;integral equation;galerkin method", "title": "On the numerical solution of some semilinear elliptic problems - II"} {"abstract": "Logistics is a very dynamic and heterogeneous application area which generates complex requirements regarding the development of information and communication technologies (ICT). For this area, it is a challenge to support mobile workers on-site in an unobtrusive manner. In this contribution, wearable computing technologies are investigated as a basis for a \"mobile worker supporting system\" for tasks at an automobile terminal. The features of wearable computing technologies are checked against the requirements of the application area to come to a usable and acceptable mobile solution in a user-centred design process.", "keywords": "mobile usability;logistics;wearable computing;mobile work processes;autonomous control;user-centred design", "title": "supporting mobile work processes in logistics with wearable computing"} {"abstract": "Simulation of granular media undergoing dynamic evolution involves nonsmooth problems when grains are modeled as rigid bodies. With dense samples, this nonsmoothness occurs everywhere in the studied domain, and large-sized systems lead to computationally intensive simulations. In this article, we combine domain decomposition approaches and nonsmooth contact dynamics. Unlike the smooth continuum media case, a coarse space problem does not trivially increase the convergence rate, as exemplified in this article with semi-analytical examples and real-size numerical simulations. Nevertheless, the description of an underlying force network in the samples may guide the analysis for new approximation schemes or algorithms.", "keywords": "nonsmooth contact dynamics;multicontact systems;scalability;multiscale;asymptotic analysis", "title": "A nonlinear domain decomposition formulation with application to granular dynamics"} {"abstract": "Two algorithms for minimum cut linear arrangement of a class of graphs called p-q dags are proposed. A p-q dag represents the connection scheme of an adder tree, such as a Wallace tree, and the VLSI layout problem of a bit slice of an adder tree is treated as the minimum cut linear arrangement problem of its corresponding p-q dag.
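A one-dimensional analogue of the eigenfunction-Galerkin scheme in the semilinear elliptic abstract above (the reduction to 1D is an assumption for illustration, not the paper's implementation): on (0, π) the Laplacian eigenfunctions are sin(kx) with eigenvalues k², so -u'' = f(x, u) with zero boundary values can be approximated by fixed-point iteration on the sine coefficients.

```python
# Eigenfunction-Galerkin sketch for -u'' = f(x, u), u(0) = u(pi) = 0:
# iterate c_k = (2/pi) * integral(f(x, u) sin(kx)) / k^2, u = sum c_k sin(kx).
import numpy as np

def trap(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def solve(f, n_modes=10, n_pts=400, iters=50):
    x = np.linspace(0.0, np.pi, n_pts)
    u = np.zeros_like(x)
    for _ in range(iters):              # fixed-point iteration
        rhs = f(x, u)
        c = [2.0 / np.pi * trap(rhs * np.sin(k * x), x) / k**2
             for k in range(1, n_modes + 1)]
        u = sum(ck * np.sin(k * x) for k, ck in enumerate(c, start=1))
    return x, u

# Mildly nonlinear right-hand side; for f = sin(x) alone, u = sin(x).
x, u = solve(lambda x, u: np.sin(x) + 0.1 * u**2)
print(round(float(u.max()), 3))         # close to 1, the linear-case amplitude
```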
One of the two algorithms is based on dynamic programming. It calculates an exact minimum solution within n^O(1) time and space, where n is the size of a given graph. The other algorithm is an approximation algorithm which calculates a solution with O(log n) cutwidth. It requires O(n log n) time.", "keywords": "graph algorithm;minimum cut linear arrangement;vlsi layout;adder tree;multiplier", "title": "Minimum cut linear arrangement of p-q dags for VLSI layout of adder trees"} {"abstract": "This paper presents a comparative study of nine different combinations of imaging environmental factors using an orthogonal test approach to obtain optimal illumination in a self-designed image acquisition device. The effects of four environmental factors - shooting distance, lamp number, lamp height and lamp side distance - on the key parameters have been investigated. Experimental results based on an L9(3^4) orthogonal test design show that, under different combinations of environmental factors, there are obvious differences in the illumination intensity and illumination uniformity of images, which are mainly affected by the shooting distance and lamp number. Based on these experiments, we obtain two preferable combinations. Attention is concentrated on finding the best one. Through further analysis and discussion, the best combination is identified. Our experimental results indicate that the orthogonal test is well suited to finding optimal environmental factors.", "keywords": "image acquisition;diffuse reflection;orthogonal test;illumination intensity;illumination uniformity", "title": "OPTIMIZATION OF ILLUMINATION ENVIRONMENTAL FACTORS BASED ON ORTHOGONAL TEST"} {"abstract": "Neisseria meningitidis serogroup B is predominantly known for its leading role in bacterial meningitis and septicemia worldwide. Although polysaccharide conjugate vaccines have been developed and used successfully against many of the serogroups of N. meningitidis, such a strategy has proved ineffective against group B meningococci. Here, we propose to develop peptide epitope-based vaccine candidates from outer membrane (OM) proteins contained in outer membrane vesicles (OMV), based on our in silico analysis. In the OMV, a total of 236 proteins were identified, only 15 (6.4%) of which were predicted to be located in the outer membrane. For the preparation of specific monoclonal antibodies against a pathogenic bacterial protein, identification and selection of B cell epitopes that act as a vaccine target are required. We selected 13 outer membrane proteins from the OMV proteins while taking into consideration the removal of cross-reactivity. The Epitopia web server was used for the prediction of B cell epitopes. Epitopes are distinguished from non-epitopes by properties such as amino acid preference on the basis of amino acid composition, secondary structure composition, and evolutionary conservation. Predicted results were subject to verification with experimental data, and we performed a string-based search through the IEDB.
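For the minimum cut linear arrangement abstract above, the brute-force reference below makes the objective concrete: the cutwidth of an arrangement is the maximum number of edges crossing any gap between consecutive positions. Exhaustive search is feasible only for tiny graphs, which is precisely why the paper's dynamic programming and O(log n)-cutwidth approximation matter; the example dag is invented for illustration.

```python
# Brute-force minimum cut linear arrangement for tiny graphs.
from itertools import permutations

def cutwidth(order, edges):
    """Max number of edges crossing any gap of the linear arrangement."""
    pos = {v: i for i, v in enumerate(order)}
    n = len(order)
    return max(sum(min(pos[u], pos[v]) < gap <= max(pos[u], pos[v])
                   for u, v in edges)
               for gap in range(1, n))

def min_cut_arrangement(vertices, edges):
    """Exhaustively search all orderings; exponential, for tiny inputs only."""
    return min(permutations(vertices), key=lambda o: cutwidth(o, edges))

# A small adder-tree-like dag: leaves 1..4 feed sums a, b, then root r.
edges = [(1, "a"), (2, "a"), (3, "b"), (4, "b"), ("a", "r"), ("b", "r")]
best = min_cut_arrangement([1, 2, 3, 4, "a", "b", "r"], edges)
print(best, cutwidth(best, edges))
```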
Our findings show that epitopes have a general preference for charged and polar amino acids; epitopes are enriched in loops as a secondary structure element, which renders them flexible and also exposes another view of antibody-antigen interaction.", "keywords": "b cell epitopes;meningococcal;neisseria meningitidis;structural features;vaccine candidates", "title": "Linear B cell epitope prediction for epitope vaccine design against meningococcal disease and their computational validations through physicochemical properties"} {"abstract": "This paper addresses the inference of probabilistic classification models using weakly supervised learning. The main contribution of this work is the development of learning methods for training datasets consisting of groups of objects with known relative class priors. This can be regarded as a generalization of the situation addressed by Bishop and Ulusoy (2005), where training information is given as the presence or absence of object classes in each set. Generative and discriminative classification methods are conceived and compared for weakly supervised learning, as well as a non-linear version of the probabilistic discriminative models. The considered models are evaluated on standard datasets and an application to fisheries acoustics is reported. The proposed proportion-based training is demonstrated to outperform model learning based on presence/absence information, and the potential of the non-linear discriminative model is shown.", "keywords": "weakly supervised learning;generative classification model;discriminative classification model", "title": "Object recognition using proportion-based prior information: Application to fisheries acoustics"} {"abstract": "In this paper, He's frequency-amplitude formulation is applied to determine the periodic solution for a nonlinear oscillator system with an irrational force. Comparison with the exact solution shows that the result obtained is of high accuracy.", "keywords": "nonlinear oscillators;he's frequency formulation;periodic solution", "title": "He's frequency-amplitude formulation for nonlinear oscillators with an irrational force"} {"abstract": "An intervention study was conducted to examine the effectiveness of an innovative self-modeling photo-training method for reducing musculoskeletal risk among office workers using computers. Sixty workers were randomly assigned to either: 1) a control group; 2) an office training group that received personal, ergonomic training and workstation adjustments; or 3) a photo-training group that received both office training and an automatic frequent-feedback system that displayed on the computer screen a photo of the worker's current sitting posture together with the correct posture photo taken earlier during office training. Musculoskeletal risk was evaluated using the Rapid Upper Limb Assessment (RULA) method before, during and after the six-week intervention. Both training methods provided effective short-term posture improvement; however, sustained improvement was only attained with the photo-training method. Both interventions had a greater effect on older workers and on workers suffering more musculoskeletal pain.
The photo-training method had a greater positive effect on women than on men.", "keywords": "occupational exposure;ergonomics;telemedicine;feedback;task performance and analysis;algorithm;posture", "title": "The effectiveness of a training method using self-modeling webcam photos for reducing musculoskeletal risk among office workers using computers"} {"abstract": "A number of investigators have reported that distance judgments in virtual environments (VEs) are systematically smaller than distance judgments made in comparably-sized real environments. Many variables that may contribute to this difference have been investigated, but none of them fully explains the distance compression. One approach to this problem that has implications for both VE applications and the study of perceptual mechanisms is to examine the influence of the feedback available to the user. Most generally, we asked whether feedback within a virtual environment would lead to more accurate estimations of distance. Next, given the prediction that some change in behavior would be observed, we asked whether specific adaptation effects would generalize to other indications of distance. Finally, we asked whether these effects would transfer from the VE to the real world. All distance judgments in the head-mounted display (HMD) became nearly accurate after three different forms of feedback were given within the HMD. However, not all feedback sessions within the HMD altered real-world distance judgments. These results are discussed with respect to the perceptual and cognitive mechanisms that may be involved in the observed adaptation effects as well as the benefits of feedback for VE applications.", "keywords": "adaptation;virtual environments;space perception;feedback", "title": "the influence of feedback on egocentric distance judgments in real and virtual environments"} {"abstract": "According to the taxonomy for agent activity proposed by V. Parunak, a collaboration is an interaction between agents of a multi-agent system (MAS) whereby the agents explicitly coordinate their actions before they cooperate. We discuss two sub-types of collaboration in the context of situated MASs, namely asynchronous and synchronous collaboration. After setting up collaboration, the interaction between the agents in an asynchronous collaboration happens indirectly through the environment. Agents direct their actions via the perceived state change of their environment. On the other hand, during a synchronous collaboration agents have to act simultaneously, and this requires an additional agreement about which actions should be executed. Although they both fit the characteristics of collaboration, the requirements for their implementation are quite different. Whereas agents in an asynchronous collaboration can be implemented as separate processes that act directly on the environment, the implementation of synchronous collaboration is more complex since it requires support for simultaneous actions. In the paper we give examples of both kinds of collaboration and outline the necessary support for their implementation.", "keywords": "synchronization;collaboration;interaction", "title": "synchronous versus asynchronous collaboration in situated multi-agent systems"} {"abstract": "In Rouached et al. (2006) and Rouached and Godart (2007) the authors described the semantics of WSBPEL by way of mapping each of the WSBPEL (Arkin et al., 2004) constructs to the EC algebra and building a model of the process behaviour.
With these mapping rules, the authors describe a modelling approach for a process defined for a single Web service composition. However, this modelling is limited to a local view and can only be used to model the behaviour of a single process. The authors then extend the semantic mapping to include Web service composition interactions through modelling Web service conversations and their choreography. This paper elaborates the models to support a view of interacting Web service compositions, extending the mapping from WSBPEL to EC and including Web service interfaces (WSDL) for use in modelling between services. The verification and validation techniques are also presented, and an automated induction-based theorem prover is used as the verification back-end.", "keywords": "choreography;orchestration;semantic mapping;verification and validation;web service composition", "title": "Web Services Compositions Modelling and Choreographies Analysis"} {"abstract": "Perspective-n-Point camera pose determination, or the PnP problem, has attracted much attention in the literature. This paper gives a systematic investigation of the PnP problem from both geometric and algebraic standpoints, and has the following contributions: Firstly, we rigorously prove that the PnP problem under the distance-based definition is equivalent to the PnP problem under the orthogonal-transformation-based definition when n > 3, and equivalent to the PnP problem under the rotation-transformation-based definition when n = 3. Secondly, we obtain the upper bounds of the number of solutions for the PnP problem under different definitions. In particular, we show that for any three non-collinear control points, we can always find a location of the optical center such that the P3P problem formed by these three control points and the optical center has 4 solutions, its upper bound. Additionally, a geometric way is provided to construct these 4 solutions. Thirdly, we introduce a depth-ratio based approach to represent the solutions of the whole PnP problem. This approach is shown to be advantageous over the traditional elimination techniques. Lastly, degenerate cases for coplanar or collinear control points are also discussed. Surprisingly enough, it is shown that if all the control points are collinear, the PnP problem under the distance-based definition has a unique solution, but the PnP problem under the transformation-based definition is only determined up to one free parameter.", "keywords": "perspective-n-point camera pose determination;distance-based definition;transformation-based definition;depth-ratio based equation;upper bound of the number of solutions", "title": "PnP problem revisited"} {"abstract": "We introduce an easy and intuitive approach to creating animations by assembling existing animations. Using our system, the user needs only to scribble regions of interest and select the example animations that he/she wants to apply. Our system will then synthesize a transformation for each triangle and solve an optimization problem to compute the new animation for this target mesh. Like playing a jigsaw puzzle game, even a novice can explore his/her creativity by using our system without learning complicated routines, using just a few simple operations to achieve the goal.", "keywords": "animation synthesis;warping;intelligent scribbling", "title": "Example-driven animation synthesis"} {"abstract": "A fast inter mode decision algorithm is proposed in this paper. The whole algorithm is divided into two stages.
In the pre-stage, by exploiting spatial and temporal information of encoded macroblocks (MBs), a skip mode early detection scheme is proposed. The homogeneity of the current MB is also analyzed to filter out small inter modes in this stage. Secondly, during the block matching stage, a motion feature based inter mode decision scheme is introduced by analyzing the motion vector predictor's accuracy, the block overlapping situation and the smoothness of the SAD (sum of absolute differences) value. Moreover, the rate distortion cost is checked at an early stage and we set some constraints to speed up the whole decision flow. Experiments show that our algorithm can achieve a speed-up factor of up to 53.4% for sequences with different motion types. The overall bit increment and quality degradation are negligible compared with existing works.", "keywords": "mode decision;h.264/avc;feature analysis", "title": "Macroblock and Motion Feature Analysis to H.264/AVC Fast Inter Mode Decision"} {"abstract": "An incremental language processor is one that accepts as input a sequence of substrings of the source language and maps them independently onto fragments in some object code. The ordered sequence of these object code fragments is then either compiled, in which case we have an incremental compiler, or interpreted. In the first case the resulting advantage is that subsequent changes in the source program entail only reprocessing the source fragments affected and recompiling the updated collection of object code fragments. In an environment where small changes are made frequently to large programs, e.g. debugging, the curtailment of reprocessing is attractive. In the second case the object code fragments are the actual run-time program representation, and hence inter-fragment relations are transiently evaluated as needed in the process of execution, with no long-term preservation of these relationships beyond the scope of their immediate need at execution time. This permits the possibility of program recomposition in the midst of execution, one of the principal characteristics of conversational computing. Many conversational language processors execute a program representation functionally analogous to parse trees, i.e. the syntax analysis of a fragment, insofar as it is possible, is done at fragment load time. This representation choice is popular because many of the expensive aspects of interpretation, including character string scanning, symbol table lookup, and parsing, are performed once only and do not contribute to the execution overhead. This paper is devoted to examining the question of the construction of such a parser in a general manner for an arbitrary source language.", "keywords": "parser;input;analysis;maps;design;collect;computation;object;trees;case;general;timing;paper;representation;scan;strings;change;incremental;fragmentation;processor;program;character;aspect;preservation;relationships;environments;interpretation;code;language;process;compilation;sequence;debugging;parsing", "title": "the design of parsers for incremental language processors"} {"abstract": "This paper presents a new algorithm for identifying all supported non-dominated vectors (or outcomes) in the objective space, as well as the corresponding efficient solutions in the decision space, for multi-objective integer network flow problems. Identifying the set of supported non-dominated vectors is of the utmost importance for obtaining a first approximation of the whole set of non-dominated vectors.
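A toy sketch of the skip-mode early detection idea from the fast inter mode decision abstract above. The neighbour rule, the threshold margin, and the cost model are invented for illustration; the paper's actual tests (homogeneity analysis, SAD smoothness, and so on) are richer.

```python
# Early SKIP decision for a macroblock based on already-encoded neighbours.
from dataclasses import dataclass

@dataclass
class MB:
    mode: str        # best mode chosen for an already-encoded macroblock
    rd_cost: float   # its rate-distortion cost

def early_skip(left: MB, top: MB, colocated: MB, rd_cost_skip: float,
               margin: float = 1.1) -> bool:
    """Declare SKIP early when the spatial and temporal neighbours were
    all SKIP and the candidate SKIP cost is no worse than theirs (with a
    tolerance margin). All thresholds here are hypothetical."""
    neighbours = [left, top, colocated]
    if all(n.mode == "SKIP" for n in neighbours):
        ref = min(n.rd_cost for n in neighbours)
        return rd_cost_skip <= margin * ref
    return False

# Usage: when early_skip(...) fires, the encoder can bypass motion search
# for all remaining inter partition modes of the current macroblock.
print(early_skip(MB("SKIP", 120.0), MB("SKIP", 130.0),
                 MB("SKIP", 125.0), rd_cost_skip=118.0))  # True
```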
This approximation is crucial, for example, in two-phase methods that first compute the supported non-dominated vectors and then the unsupported non-dominated ones. Our approach is based on a negative-cycle algorithm used in single objective minimum cost flow problems, applied to a sequence of parametric problems. The proposed approach uses the connectedness property of the set of supported non-dominated vectors/efficient solutions to find all integer solutions in maximal non-dominated/efficient facets.", "keywords": "multi-objective linear and integer programming;multi-objective network flows;negative-cycle algorithms;parametric programming", "title": "On the computation of all supported efficient solutions in multi-objective integer network flow problems"} {"abstract": "In real optimization we always meet two main groups of criteria: requirements to increase useful outcomes or decrease expenses, and demands for lower uncertainty or, in other words, risk minimization. Therefore, it seems advisable to formulate the optimization problem under conditions of uncertainty as, at least, a two-objective one, on the basis of local criteria of outcome increase or expense reduction and risk minimization. Generally, risk may be treated as the uncertainty of the obtained result. In the considered situation, the degree of risk (uncertainty) may be defined in a natural way through the width of the final interval objective function at the achieved optimum point. To solve the given problem, a two-objective interval comparison technique has been developed, taking into account the probability of supremacy of one interval over the other and the relation of the compared interval widths. To illustrate the efficiency of the proposed method, simple examples of the minimization of an interval double-extreme discontinuous cost function and a fuzzy extension of Rosenbrock's test function are presented.", "keywords": "crisp interval;fuzzy interval;interval comparison;probabilistic approach;optimization", "title": "Two-objective method for crisp and fuzzy interval comparison in optimization"} {"abstract": "This paper proposes a technique for the detection of head nod and shake gestures based on eye tracking and head motion decision. The eye tracking step is divided into face detection and eye location. Here, we apply a motion segmentation algorithm that examines differences in moving people's faces. This system utilizes a Hidden Markov Model-based head detection module that carries out complete detection in the input images, followed by the eye tracking module that refines the search based on a candidate list provided by the preprocessing module. The novelty of this paper is derived from differences in real-time input images, preprocessing to remove noise (morphological operators and so on), detecting edge lines and restoration, finding the face area, and cutting the head candidate. Moreover, we adopt a K-means algorithm for finding the head region. Real-time eye tracking extracts the location of the eyes from the detected face region and is performed close to the pair of eyes. After eye tracking, the coordinates of the detected eyes are transformed into a normalized vector of x-coordinate and y-coordinate. The head nod and shake detector uses three hidden Markov models (HMMs). The HMM representation of head detection can estimate the underlying HMM states from a sequence of face images. Head nod and shake can be detected by three HMMs that are adapted by a directional vector.
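A small sketch relating to the interval-comparison abstract above: estimating the probability that one crisp interval is smaller than another, here assuming values uniformly distributed inside their intervals (the paper's exact definition of the probability of supremacy may differ).

```python
# Monte Carlo estimate of P(x_a < x_b) for x_a ~ U[a], x_b ~ U[b].
import numpy as np

rng = np.random.default_rng(0)

def p_less(a: tuple, b: tuple, n: int = 200_000) -> float:
    xa = rng.uniform(a[0], a[1], n)
    xb = rng.uniform(b[0], b[1], n)
    return float(np.mean(xa < xb))

A, B = (1.0, 4.0), (2.0, 3.0)
print(f"P(A < B) ~ {p_less(A, B):.3f}, width ratio = {(4 - 1) / (3 - 2):.1f}")
# A wider interval at the optimum corresponds to a riskier (more
# uncertain) outcome -- the second objective in the paper's formulation.
```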
The directional vector represents the direction of the head movement and is used by the HMMs to discriminate the neutral state from head nods and shakes. These techniques are implemented on images, and notable success is reported.", "keywords": "head detection;head location;eye location;hidden markov models", "title": "Development of head detection and tracking systems for visual surveillance"} {"abstract": "The complexity of printed circuit boards (PCBs), as an important sector of the electronics manufacturing industry, has increased over the last three decades. This paper focuses on a practical application observed at a PCB assembly line of an electronics manufacturing facility. It is shown that this problem is equivalent to a flowshop scheduling problem with multiple heterogeneous batch processors, where processors can perform multiple tasks as long as the sizes of jobs in a batch do not violate the processors' capacity. The equivalent problem is mathematically formulated as a mixed integer programming model. Then, a Monte Carlo simulation is incorporated into high-level genetic algorithm-based intelligent optimization techniques to assess the performance of the makespan-oriented system under uncertain processing times. At each iteration of the algorithm, the output of the simulator is used by the optimizers to provide online feedback on the progress of the search and direct the search toward a promising solution zone. Furthermore, various parameters and operators of the algorithm are discussed and calibrated by means of the Taguchi statistical technique. The result of extensive computational experiments shows that the solution approach gives high-quality solutions in reasonable computational time.", "keywords": "electronics manufacturing;pcb assembly;monte carlo simulation;genetic algorithms", "title": "Scheduling of printed circuit board (PCB) assembly systems with heterogeneous processors using simulation-based intelligent optimization methods"} {"abstract": "This paper introduces a new multi-agent model for intelligent agents, called the reinforcement learning hierarchical neuro-fuzzy multi-agent system. This class of model uses a hierarchical partitioning of the input space with a reinforcement learning algorithm to overcome limitations of previous RL methods. The main contribution of the new system is to provide a flexible and generic model for multi-agent environments. The proposed generic model can be used in several applications, including competitive and cooperative problems, with the autonomous capacity to create fuzzy rules and expand their own rule structures, extracting knowledge from the direct interaction between the agents and the environment, without any use of supervised algorithms. The proposed model was tested in three different case studies, with promising results. The tests demonstrated that the developed system attained a good capacity for convergence and coordination among the autonomous intelligent agents.", "keywords": "multi-agent systems;hierarchical neuro-fuzzy;intelligent agents;reinforcement learning", "title": "Multi-agent systems with reinforcement hierarchical neuro-fuzzy models"} {"abstract": "In this paper, it is assumed that the rates of return on assets can be expressed by possibility distributions rather than probability distributions. We propose two kinds of portfolio selection models based on lower and upper possibilistic means and possibilistic variances, respectively, and introduce the notions of lower and upper possibilistic efficient portfolios.
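A numeric illustration of the lower and upper possibilistic means just introduced in the portfolio-selection abstract above, computed by gamma-cut integration, M_low = 2*int_0^1 g*a1(g) dg and M_up = 2*int_0^1 g*a2(g) dg. The trapezoidal shape of the fuzzy return and the parameter values are assumptions for illustration.

```python
# Lower/upper possibilistic means of a trapezoidal fuzzy return with
# core [a, b] and left/right spreads alpha, beta.
import numpy as np

a, b, alpha, beta = 0.02, 0.05, 0.01, 0.03  # hypothetical asset return

g = np.linspace(0.0, 1.0, 100001)
a1 = a - (1.0 - g) * alpha   # left endpoint of the gamma-cut
a2 = b + (1.0 - g) * beta    # right endpoint of the gamma-cut

m_low = 2.0 * np.trapz(g * a1, g)
m_up = 2.0 * np.trapz(g * a2, g)

# For a trapezoidal number these integrals reduce to the closed forms
# a - alpha/3 and b + beta/3, which the numeric values reproduce.
print(f"lower mean {m_low:.5f} (exact {a - alpha / 3:.5f}), "
      f"upper mean {m_up:.5f} (exact {b + beta / 3:.5f})")
```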
We also present an algorithm which can derive the explicit expression of the possibilistic efficient frontier for the possibilistic mean-variance portfolio selection problem dealing with lower bounds on asset holdings.", "keywords": "possibility theory;possibilistic mean;possibilistic variance;portfolio selection;optimization", "title": "Possibilistic mean-variance models and efficient frontiers for portfolio selection problem"} {"abstract": "This study explores possible leadership perceptions of Millennials working in academic libraries, specifically their definition, the attributes they associate with leadership, whether they want to assume formal leadership roles, whether they perceive themselves as leaders, and whether they perceive leadership opportunities within their organizations and LIS professional associations. An online survey was utilized to gather the responses, and the study participants comprised Millennials (born 1982 or after) currently working full-time in libraries that were members of the Committee on Institutional Cooperation (CIC), a consortium of the Big Ten universities and the University of Chicago, in 2011-12.", "keywords": "leadership;millennials;academic;leaders;perceptions;management", "title": "Millennials among the Professional Workforce in Academic Libraries: Their Perspective on Leadership"} {"abstract": "Three-dimensional mesh fusion provides an easy and fast way to create new mesh models from existing ones. We introduce a novel approach to mesh fusion in this paper based on functional blending. Our method has no restriction of disk-like topology or one-ring opening on the meshes to be merged. First of all, sections with boundaries of the under-fusing meshes are converted into implicit representations. An implicit transition surface, which joins the sections together while keeping smoothness at the boundaries, is then created based on cubic Hermite functional blending. Finally, the implicit surface is tessellated to form the resultant mesh. Our scheme is both efficient and simple, and with it users can easily construct interesting, complex 3D models.", "keywords": "mesh fusion;functional blending;interactive modeling tool", "title": "Mesh fusion using functional blending on topologically incompatible sections"} {"abstract": "This study integrates ground spectrometry, imaging spectrometry, and in situ pavement condition surveys for asphalt road assessment. Field spectra showed that asphalt aging and deterioration produce measurable changes in spectra as these surfaces undergo a transition from hydrocarbon-dominated new roads to mineral-dominated older roads. Several spectral measures derived from field and image spectra correlated well with pavement quality indicators. Spectral confusion between pavement material aging and asphalt mix erosion on the one hand, and structural road damages (e.g. cracking) on the other, poses some limits to remote sensing based mapping. Both the common practice methods (Pavement Management System (PMS) records, in situ vehicle inspections) and analysis using imaging spectrometry are effective in identifying roads in good and very good condition.
Variance and uncertainty in all survey data (PMS, in situ vehicle inspections, remote sensing) increase for road surfaces in poor condition, and clear determination of specific (and expensive) surface treatment decisions remains problematic with these methods.", "keywords": "asphalt road survey;imaging spectrometry;pavement management;spectral library;remote sensing;hyperspectral", "title": "Imaging spectrometry and asphalt road surveys"} {"abstract": "Past research has shown the variations of students' conceptions of learning, but little work has specifically addressed students' conceptions of web-based learning or compared students' conceptions of learning in general with their conceptions of web-based learning in particular. By interviewing 83 Taiwanese college students with some web-based learning experiences, this study attempted to investigate the students' conceptions of learning, conceptions of web-based learning, and the differences between these conceptions. Using the phenomenographic method of analyzing student interview transcripts, several categories of conceptions of learning and of web-based learning were revealed. The analyses of interview results suggested that the conceptions of web-based learning were often more sophisticated than those of learning. For example, many more students conceptualized learning in the web-based context as pursuing real understanding and seeing in a new way than for learning in general. This implies that the implementation of web-based instruction may be a potential avenue for promoting students' conceptions of learning. By gathering questionnaire responses from the students, this study further found that the sophistication of the conceptions toward web-based learning was associated with better searching strategies as well as higher self-efficacy for web-based learning.", "keywords": "post-secondary education;distance education and telelearning", "title": "Conceptions of learning versus conceptions of web-based learning: The differences revealed by college students"} {"abstract": "Cigar functions are convex quadratic functions that are characterised by the presence of only two distinct eigenvalues of their Hessian, the smaller one of which occurs with multiplicity one. Their ridge-like topology makes them a useful test case for optimisation strategies. This paper extends previous work on modelling the behaviour of evolution strategies with isotropically distributed mutations optimising cigar functions by considering weighted recombination as well as the effects of noise on optimisation performance. It is found that the same weights that have previously been seen to be optimal for the sphere and parabolic ridge functions are optimal for cigar functions as well. The influence of the presence of noise on optimisation performance depends qualitatively on the trajectory of the search point, which in turn is determined by the strategy's mutation strength as well as its population size and recombination weights. Analytical results are obtained for the case of cumulative step length adaptation.", "keywords": "evolution strategy;weighted recombination;cumulative step length adaptation;cigar function;noise", "title": "on the behaviour of weighted multi-recombination evolution strategies optimising noisy cigar functions"} {"abstract": "In this paper, we present human emotion recognition systems based on audio and spatio-temporal visual features.
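An illustrative sketch relating to the cigar-function abstract above: a (mu/mu_w, lambda) evolution strategy with isotropic Gaussian mutations and weighted recombination on f(x) = x_1^2 + xi * sum_{i>1} x_i^2. The logarithmic weights are a common default and the crude geometric step-size decay is a stand-in; the paper itself analyses cumulative step length adaptation.

```python
# Weighted multi-recombination ES on a cigar function.
import numpy as np

rng = np.random.default_rng(1)
n, xi = 10, 1e4          # dimension and conditioning of the cigar
lam, mu = 12, 6          # offspring and parent numbers

def cigar(x):
    return x[0] ** 2 + xi * np.sum(x[1:] ** 2)

w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
w /= w.sum()             # positive, decreasing recombination weights

x, sigma = np.ones(n), 1.0
for t in range(3000):
    z = rng.standard_normal((lam, n))
    y = x + sigma * z                            # lambda offspring
    order = np.argsort([cigar(yi) for yi in y])  # rank by fitness
    x = x + sigma * w @ z[order[:mu]]            # weighted recombination
    sigma *= 0.998                               # crude step-size schedule
print(f"f(x) after {t + 1} generations: {cigar(x):.3e}")
```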
The proposed system has been tested on audio visual emotion data set with different subjects for both genders. The mel-frequency cepstral coefficient (MFCC) and prosodic features are first identified and then extracted from emotional speech. For facial expressions spatio-temporal features are extracted from visual streams. Principal component analysis (PCA) is applied for dimensionality reduction of the visual features and capturing 97% of variances. Codebook is constructed for both audio and visual features using Euclidean space. Then occurrences of the histograms are employed as input to the state-of-the-art SVM classifier to realize the judgment of each classifier. Moreover, the judgments from each classifier are combined using Bayes sum rule (BSR) as a final decision step. The proposed system is tested on public data set to recognize the human emotions. Experimental results and simulations proved that using visual features only yields on average 74.15% accuracy, while using audio features only gives recognition average accuracy of 67.39%. Whereas by combining both audio and visual features, the overall system accuracy has been significantly improved up to 80.27%.", "keywords": "human computer interface ;multimodal system;human emotions;support vector machines ;spatio-temporal features", "title": "Human emotion recognition from videos using spatio-temporal and audio features"} {"abstract": "This article provides a comprehensive review of the currently available technologies for vitamin and mineral rice fortification. It covers currently used technologies, such as coating, dusting, and the various extrusion technologies, with the main focus being on cold, warm, and hot extrusion technologies, including process flow, required facilities, and sizes of operation. The advantages and disadvantages of the various processing methods are covered, including a discussion on micronutrients with respect to their technical feasibility during processing, storage, washing, and various cooking methods and their physiological importance. The microstructure of fortified rice kernels and their properties, such as visual appearance, sensory perception, and the impact of different micronutrient formulations, are discussed. Finally, the article covers recommendations for quality control and provides a summary of clinical trials.", "keywords": "rice fortification;technologies;nutrients;vitamins;minerals", "title": "Fortification of rice: technologies and nutrients"} {"abstract": "Traditional techniques of perceptual mapping hypothesize that products are differentiated in a common perceptual space of attributes. This paper suggests that each product is differentiated not only in a common perceptual space, but also a unique perceptual space consisting of as many dimensions as the number of products. It provides a model and estimation procedure based on alternating least squares for estimating the model parameters.", "keywords": "product differentiation;product uniqueness;brand image;three-way data;multidimensional scaling;proximities", "title": "A perceptual mapping procedure for analysis of proximity data to determine common and unique product-market structures"} {"abstract": "In this paper we describe the design concepts and prototype implementation of a situation aware ubiquitous computing system using multiple modalities such as National Marine Electronics Association (NMEA) data from global positioning system (GPS) receivers, text, speech, environmental audio, and handwriting inputs. 
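A minimal sketch of the decision-level fusion step from the emotion-recognition abstract above: two per-modality SVM classifiers whose class posteriors are combined with the sum rule. The synthetic "audio" and "visual" features are stand-ins for the MFCC and spatio-temporal histogram inputs the abstract describes.

```python
# Sum-rule fusion of two modality-specific SVM classifiers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

Xa, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                            n_classes=3, n_clusters_per_class=1,
                            random_state=0)
# A noisier second "view" standing in for the other modality.
Xv = Xa + np.random.default_rng(0).normal(scale=2.0, size=Xa.shape)

Xa_tr, Xa_te, Xv_tr, Xv_te, y_tr, y_te = train_test_split(
    Xa, Xv, y, test_size=0.3, random_state=0)

audio_clf = SVC(probability=True, random_state=0).fit(Xa_tr, y_tr)
visual_clf = SVC(probability=True, random_state=0).fit(Xv_tr, y_tr)

# Sum rule: add the class posteriors of both modalities, then take the
# arg max as the fused decision.
fused = audio_clf.predict_proba(Xa_te) + visual_clf.predict_proba(Xv_te)
y_hat = fused.argmax(axis=1)
print(f"fused accuracy: {(y_hat == y_te).mean():.3f}")
```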
While most mobile and communication devices know where and who they are, by accessing context information primarily in the form of location, time stamps, and user identity, the concept of sharing this information in a reliable and intelligent fashion is crucial in many scenarios. A framework which takes the concept of context aware computing to the level of situation aware computing by intelligent information exchange between context aware devices is designed and implemented in this work. Four sensory modes of contextual information, namely text, speech, environmental audio, and handwriting, are added to conventional contextual information sources like location from GPS, user identity based on IP addresses (IPA), and time stamps. Each device derives its context not necessarily using the same criteria or parameters but by employing selective fusion and fission of multiple modalities. The processing of each individual modality takes place at the client device, followed by the summarization of context as a text file. Exchange of dynamic context information between devices is enabled in real time to create multimodal situation aware devices. A central repository of all user context profiles is also created to enable self-learning devices in the future. Based on the results of simulated situations and real field deployments, it is shown that the use of multiple modalities like speech, environmental audio, and handwriting inputs along with conventional modalities can create devices with enhanced situational awareness.", "keywords": "bidirectional ftp synchronization;environmental audio;gps;speech recognition;ubiquitous computing", "title": "On the Design and Prototype Implementation of a Multimodal Situation Aware System"} {"abstract": "Breast cancer continues to be the most common cause of cancer deaths in women. Early detection of breast cancer is significant for better prognosis. Digital mammography currently offers the best control strategy for the early detection of breast cancer. The research work in this paper investigates the significance of neural-association of microcalcification patterns for their reliable classification in digital mammograms. The proposed technique explores the auto-associative abilities of a neural network approach to regenerate the composite of its learned patterns most consistent with the new information; thus the regenerated patterns can uniquely signify each input class and improve the overall classification. Two types of features, computer extracted (gray level based statistical) features and human extracted (radiologists' interpretation) features, are used for the classification of the calcification type of breast abnormalities. The proposed technique attained the highest classification rate of 90.5% on the calcification testing dataset.", "keywords": "neural networks;auto-associator;classifier;feature extraction;digital mammography", "title": "Neural-association of microcalcification patterns for their reliable classification in digital mammography"} {"abstract": "Pulmonary embolism is the third leading cause of death in hospitalized patients in the US. Vena cava filters are medical devices inserted into the inferior vena cava (IVC) and are designed to trap thrombi before they reach the lungs. Once trapped in a filter, however, thrombi disturb otherwise natural flow patterns, which may be clinically significant.
The goal of this work is to use computational modeling to study the hemodynamics of an unoccluded and partially occluded IVC under rest and exercise conditions. A realistic, three-dimensional model of the IVC, iliac, and renal veins represents the vessel geometry, and spherical clots represent thrombi trapped by several conical filter designs. Inflow rates correspond to rest and exercise conditions, and a transitional turbulence model captures transitional flow features, if they are present. The flow equations are discretized and solved using a second-order finite-volume method. No significant regions of transitional flow are observed. Nonetheless, the volume of stagnant and recirculating flow increases with partial occlusion and exercise. For the partially occluded vessel, large wall shear stresses are observed on the IVC and on the model thrombus, especially under exercise conditions. These large wall shear stresses may have mixed clinical implications: thrombotic-like behavior may initiate on the vessel wall, which is undesirable; and thrombolysis may be accelerated, which is desirable.", "keywords": "cfd;filter;thrombosis;vena cava;wall shear stress", "title": "Modeling hemodynamics in an unoccluded and partially occluded inferior vena cava under rest and exercise conditions"} {"abstract": "During the past few years, algorithmic improvements alone have reduced the time required for the direct solution of unsymmetric sparse systems of linear equations by almost an order of magnitude. This paper compares the performance of some well-known software packages for solving general sparse systems. In particular, it demonstrates the consistently high level of performance achieved by WSMP, the most recent of such solvers. It compares the various algorithmic components of these solvers and discusses their impact on solver performance. Our experiments show that the algorithmic choices made in WSMP enable it to run more than twice as fast as the best among similar solvers and that WSMP can factor some of the largest sparse matrices available from real applications in only a few seconds on a 4-CPU workstation. Thus, the combination of advances in hardware and algorithms makes it possible to quickly and easily solve general sparse linear systems that might have been considered too large until recently.", "keywords": "algorithms;performance;sparse matrix factorization;sparse lu decomposition;multifrontal method;parallel sparse solvers", "title": "Recent advances in direct methods for solving unsymmetric sparse systems of linear equations"} {"abstract": "Quality function deployment (QFD) is a planning tool used in new product development and quality management. It aims at achieving maximum customer satisfaction by listening to the voice of customers. To implement QFD, customer requirements (CRs) should be identified and assessed first. The current paper proposes a linear goal programming (LGP) approach to assess the relative importance weights of CRs. The LGP approach enables customers to express their preferences on the relative importance weights of CRs in their preferred or familiar formats, which may differ from one customer to another but need not be transformed into the same format, thus avoiding information loss or distortion. A numerical example is tested with the LGP approach to demonstrate its validity, effectiveness and potential applications in QFD practice.
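A minimal sketch in the spirit of the linear-goal-programming abstract above: recovering customer-requirement weights that best satisfy stated preference ratios, minimising the total deviation. The three preference statements are hypothetical and expressed in a single ratio format, whereas the paper accommodates several preference formats.

```python
# Goal programming for CR weights via a linear program.
import numpy as np
from scipy.optimize import linprog

k = 3  # number of customer requirements CR1..CR3
# Each statement says "w_i = r * w_j"; deviation variables absorb
# inconsistency between statements.
prefs = [(0, 1, 2.0),   # CR1 twice as important as CR2
         (1, 2, 1.5),   # CR2 1.5 times as important as CR3
         (0, 2, 2.0)]   # CR1 twice as important as CR3 (inconsistent set)

n_dev = 2 * len(prefs)  # one d- and one d+ per statement
c = np.concatenate([np.zeros(k), np.ones(n_dev)])  # minimise sum of deviations

A_eq, b_eq = [], []
for row, (i, j, r) in enumerate(prefs):
    eq = np.zeros(k + n_dev)
    eq[i], eq[j] = 1.0, -r
    eq[k + 2 * row], eq[k + 2 * row + 1] = 1.0, -1.0  # + d-  - d+
    A_eq.append(eq)
    b_eq.append(0.0)
A_eq.append(np.concatenate([np.ones(k), np.zeros(n_dev)]))  # weights sum to 1
b_eq.append(1.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * (k + n_dev), method="highs")
print("weights:", np.round(res.x[:k], 4))
```

Because the third statement contradicts the first two, the optimum necessarily carries a nonzero deviation, which is exactly the inconsistency the goal-programming formulation is designed to absorb.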
", "keywords": "quality function deployment;customer requirement;customer preference;preference format;goal programming;group decision making", "title": "A linear goal programming approach to determining the relative importance weights of customer requirements in quality function deployment"} {"abstract": "Precomputed radiance transfer (PRT) captures realistic lighting effects from distant, low-frequency environmental lighting but has been limited to static models or precomputed sequences. We focus on PRT for local effects such as bumps, wrinkles, or other detailed features, but extend it to arbitrarily deformable models. Our approach applies zonal harmonics (ZH) which approximate spherical functions as sums of circularly symmetric Legendre polynomials around different axes. By spatially varying both the axes and coefficients of these basis functions, we can fit to spatially varying transfer signals. Compared to the spherical harmonic (SH) basis, the ZH basis yields a more compact approximation. More important, it can be trivially rotated whereas SH rotation is expensive and unsuited for dense per-vertex or per-pixel evaluation. This property allows, for the first time, PRT to be mapped onto deforming models which re-orient the local coordinate frame. We generate ZH transfer models by fitting to PRT signals simulated on meshes or simple parametric models for thin membranes and wrinkles. We show how shading with ZH transfer can be significantly accelerated by specializing to a given lighting environment. Finally, we demonstrate real-time rendering results with soft shadows, inter-reflections, and subsurface scatter on deforming models.", "keywords": "lighting environments;nonlinear optimization;spherical;harmonics;soft shadows;subsurface scattering;texture maps;zonal harmonics", "title": "Local, deformable precomputed radiance transfer"} {"abstract": "A spatial semi-discretization is developed for the two-dimensional depth-averaged shallow water equations on a non-equidistant structured and staggered grid. The vector identities required for energy conservation in the continuous case are identified. Discrete analogues are developed, which lead to a finite-volume semi-discretisation which conserves mass, momentum, and energy simultaneously. The key to discrete energy conservation for the shallow water equations is to numerically distinguish storage of momentum from advective transport of momentum. Simulation of a large-amplitude wave in a basin confirms the conservative properties of the new scheme, and demonstrates the enhanced robustness resulting from the compatibility of continuity and momentum equations. The scheme can be used as a building block for constructing fully conservative curvilinear, higher order, variable density, and non-hydrostatic discretizations. ", "keywords": "shallow water equations;energy-conservation;fully conservative;mimetic;finite-volume;finite-difference;symmetry preservation;staggered;c-grid", "title": "A mimetic mass, momentum and energy conserving discretization for the shallow water equations"} {"abstract": "In this paper, we present a novel concept named semantic component for 3D object search which describes a key component that semantically defines a 3D object. In most cases, the semantic component is intra-category stable and therefore can be used to construct an efficient 3D object retrieval scheme. 
By segmenting an object into segments and learning the similar segments shared by all the objects in the same category, we can summarise what humans use for object recognition; from this analysis we develop a method to find the semantic component of an object. In our experiments, the proposed method is validated and the effectiveness of our algorithm is also demonstrated.", "keywords": "semantic component;3d object search", "title": "3d object search through semantic component"} {"abstract": "Provides an executable formal Real-Time Maude semantics for Timed Rebeca. Integrates Real-Time Maude analysis into the Rebeca toolchain. Provides an efficient semantics using partial-order-reduction-like techniques. Shows the performance gained by this optimization.", "keywords": "real-time actors;timed rebeca;formal semantics;model checking;real-time maude", "title": "Formal semantics and efficient analysis of Timed Rebeca in Real-Time Maude"} {"abstract": "A new bonding-tool solution is proposed to improve stitch bondability by creating a new surface morphology on the tip surface of a wire-bonding tool (capillary). The surface has relatively deep lines with no fixed directions. This new capillary exhibits less slipping between the wire and the capillary tip surface and provides a better coupling effect between them. Experiments of wire bonding on unstable lead frames/substrates, alloyed wire (2N gold wire) bonding, and copper wire bonding were carried out to confirm the effect of the new capillary on the stitch bondability. The experimental results are promising and have proved that the use of the new capillary could improve the bondability of the stitch bond and minimize the occurrence of short tail defects and non-sticking on lead during bonding.", "keywords": "microelectronics packaging;wire bond;stitch bondability;capillary;copper wire bonding", "title": "A new bonding-tool solution to improve stitch bondability"} {"abstract": "This paper presents a new approach to local instruction scheduling based on integer programming that produces optimal instruction schedules in a reasonable time, even for very large basic blocks. The new approach first uses a set of graph transformations to simplify the data-dependency graph while preserving the optimality of the final schedule. The simplified graph results in a simplified integer program which can be solved much faster. A new integer-programming formulation is then applied to the simplified graph. Various techniques are used to simplify the formulation, resulting in fewer integer-program variables, fewer integer-program constraints and fewer terms in some of the remaining constraints, thus reducing integer-program solution time. The new formulation also uses certain adaptively added constraints (cuts) to reduce solution time. The proposed optimal instruction scheduler is built within the Gnu Compiler Collection (GCC) and is evaluated experimentally using the SPEC95 floating point benchmarks.
Although optimal scheduling for the target processor is considered intractable, all of the benchmarks' basic blocks are optimally scheduled, including blocks with up to 1000 instructions, while total compile time increases by only 14%.", "keywords": "processor;variability;instruction scheduling;scheduling;optimality;benchmark;data dependence;collect;instruction;compilation;graph;floating point;timing;graph transformation;paper;constraint;integer programming;locality", "title": "optimal instruction scheduling using integer programming"} {"abstract": "Web services are emerging as a major technology for building service-oriented distributed systems. Potentially, various resources on the Internet can be virtualized as Web services for a wider use by their communities. Service discovery becomes an issue of vital importance for Web services applications. This article presents ROSSE, a Rough Sets based Search Engine for Web service discovery. One salient feature of ROSSE lies in its capability to deal with uncertainty of service properties when matching services. A use case is presented to demonstrate the use of ROSSE for discovery of car services. ROSSE is evaluated in terms of its accuracy and efficiency in service discovery.", "keywords": "owl-s;rough sets;service matchmaking;web service discovery", "title": "Web Services Discovery with Rough Sets"} {"abstract": "We propose an optimal time adjustment method from the viewpoint of frequency stability, which is defined as the Allan deviation. When time adjustment is needed for a clock in a networked computer, it is made over a period called a time adjustment period. The proposed method optimizes frequency stability for a given time adjustment period. This method has been evaluated and compared with the adjtime() system call in UNIX systems in terms of frequency stability and the duration of the time adjustment period needed for achieving particular values of frequency stability. For time intervals from 1 to 1,000 s, the frequency stability achieved by the proposed method was about 0.01-0.5 of that achieved by the adjtime() system call. The evaluation also showed that the duration of a time adjustment period needed for achieving a frequency stability of 1.0 x 10^-10 with the proposed method was less than 1/12 (1/6) of the period with the adjtime() system call when we optimized frequency stability for a 60 (3,600) s time interval, under the condition that the duration of the time-adjustment period was 12 h.", "keywords": "clock synchronization;allan deviation;allan variance;frequency stability", "title": "Stability-Optimized Time Adjustment for a Networked Computer Clock"} {"abstract": "The quality of mobile services in the mobile and wireless world is ultimately judged in terms of customer satisfaction. This is particularly true for the third generation (3G) and beyond multimedia mobile services which should meet or exceed customer expectations. In this study Quality Function Deployment (QFD) is used for the first time as a quality improvement approach for building customers' requirements into mobile services. Traditionally the QFD approach is adopted in product and manufacturing industries. In this paper QFD is extended to the mobile service industry, which is such a promising industry in today's information society. This paper proposes a generic framework based on QFD concepts and practices to improve mobile service design and development.
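A small sketch of the stability metric used in the clock-adjustment abstract above: the non-overlapped Allan deviation computed from fractional-frequency samples, sigma_y^2(tau) = 0.5 * <(ybar_{k+1} - ybar_k)^2> with ybar averaged over blocks of length tau. The white-noise input is synthetic.

```python
# Allan deviation from fractional frequency data sampled every tau0 s.
import numpy as np

def allan_deviation(y: np.ndarray, m: int) -> float:
    """Non-overlapped Allan deviation at tau = m * tau0."""
    n_blocks = len(y) // m
    ybar = y[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    diffs = np.diff(ybar)
    return np.sqrt(0.5 * np.mean(diffs ** 2))

rng = np.random.default_rng(0)
y = 1e-9 * rng.standard_normal(100_000)  # white frequency noise, tau0 = 1 s
for m in (1, 10, 100, 1000):
    print(f"tau = {m:5d} s  sigma_y = {allan_deviation(y, m):.3e}")
# For white frequency noise the deviation falls roughly as tau^-0.5; a
# time-adjustment scheme degrades stability if it injects extra
# frequency offsets over the adjustment period.
```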
An example is presented to illustrate the use of QFD for mobile e-learning services for university students and lecturers. The data transmission speed is found to be the most critical requirement in mobile e-learning services. By the use of QFD, the developed mobile services can best meet customers' requirements or even exceed their expectations. At the end of this paper, some benefits as well as further improvements regarding the QFD approach are discussed.", "keywords": "mobile service;e-learning;quality function deployment;customer requirement;voice of customer;house of quality", "title": "Improving mobile services design: A QFD approach"} {"abstract": "This study examines the research performance and international research collaborations (IRC) of ASEAN nations in the area of economics. Over the last 3 decades internationally collaborated papers have increased in the region, while locally co-authored papers have declined. Singapore towered among ASEAN nations in research efficiency based on geographical area, population and GDP. Vietnam performed relatively better in research efficiency than research productivity (number of papers produced), while Indonesia performed poorly. Overall, internationally co-authored papers were cited twice as often as locally authored papers, except that both The Philippines and Indonesia exhibited almost no difference in how their local and internationally co-authored papers were cited. The study also examined IRC from the perspective of social networks. Centrality had a strong correlation with research performance; however, vertex tie-strength (a result of repeat collaboration) showed the maximum correlation with research performance. While Malaysia emerged as the nation with the highest betweenness centrality or bridging power, the US emerged as the most favoured international partner of ASEAN nations. However, collaboration between ASEAN countries accounted for just 4% of all international collaborations. Increased academic mobility and more joint scientific works are suggested as ways to boost educational co-operation among the ASEAN nations.", "keywords": "international research collaborations;research efficiency;social networks;asean;economics", "title": "International research collaborations of ASEAN Nations in economics, 1979-2010"} {"abstract": "Multi-agent teamwork is critical in a large number of agent applications, including training, education, virtual enterprises and collective robotics. The complex interactions of agents in a team as well as with other agents make it extremely difficult for human developers to understand and analyze agent-team behavior. It has thus become increasingly important to develop tools that can help humans analyze, evaluate, and understand team behaviors. However, the problem of automated team analysis is largely unaddressed in previous work. In this article, we identify several key constraints faced by team analysts. Most fundamentally, multiple types of models of team behavior are necessary to analyze different granularities of team events, including agent actions, interactions, and global performance. In addition, effective ways of presenting the analysis to humans are critical and the presentation techniques depend on the model being presented. Finally, analysis should be independent of underlying team architecture and implementation. We also demonstrate an approach to addressing these constraints by building an automated team analyst called ISAAC for post-hoc, off-line agent-team analysis.
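A toy sketch of the network measure highlighted in the ASEAN-collaboration abstract above: betweenness centrality ("bridging power") on a small co-authorship graph. The nodes, edges, and collaboration counts are invented for illustration; only the qualitative finding (Malaysia's high betweenness) comes from the abstract.

```python
# Betweenness centrality on a hypothetical country co-authorship network.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("Malaysia", "Singapore", 12), ("Malaysia", "Thailand", 8),
    ("Malaysia", "Vietnam", 5), ("Singapore", "US", 30),
    ("Thailand", "US", 10), ("Vietnam", "Philippines", 3),
    ("Malaysia", "Indonesia", 4),
])  # weights are hypothetical paper counts, kept for reference only

# Unweighted shortest-path betweenness: how often a country lies on
# the shortest collaboration paths between other countries.
bc = nx.betweenness_centrality(G)
for country, score in sorted(bc.items(), key=lambda kv: -kv[1]):
    print(f"{country:12s} {score:.3f}")
```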
ISAAC acquires multiple, heterogeneous team models via machine learning over teams' external behavior traces, where the specific learning techniques are tailored to the particular model learned. Additionally, ISAAC employs multiple presentation techniques that can aid human understanding of the analyses. ISAAC also provides feedback on team improvement in two novel ways: (i) it supports principled "what-if" reasoning about possible agent improvements; (ii) it allows the user to compare different teams based on their patterns of interactions. This paper presents ISAAC's general conceptual framework, motivating its design, as well as its concrete application in two domains: (i) RoboCup Soccer; (ii) software agent teams participating in a simulated evacuation scenario. In the RoboCup domain, ISAAC was used prior to and during the RoboCup '99 tournament, and was awarded the RoboCup Scientific Challenge Award. In the evacuation domain, ISAAC was used to analyze patterns of message exchanges among software agents, illustrating the generality of ISAAC's techniques. We present detailed algorithms and experimental results from ISAAC's application.", "keywords": "teamwork;analysis;multiagent systems", "title": "Automated assistants for analyzing team behaviors"} {"abstract": "Clustering has been well received as one of the effective solutions to enhance energy efficiency and scalability of large-scale wireless sensor networks. The goal of clustering is to identify a subset of nodes in a wireless sensor network, such that all the other nodes communicate with the network sink via these selected nodes. However, many current clustering algorithms are tightly coupled with exact sensor locations derived through either triangulation methods or extra hardware such as GPS equipment. In practice, however, it is very difficult to know sensor location coordinates accurately due to various factors such as random deployment and low-power, low-cost sensing devices. Therefore, how to develop an adaptive clustering algorithm without relying on exact sensor location information is a very important yet challenging problem. In this paper, we try to address this problem by proposing a new adaptive clustering algorithm for energy efficiency of wireless sensor networks. Compared with other work done in this area, our proposed adaptive clustering algorithm is original because of its capability to infer the location information by mining wireless sensor energy data. Furthermore, based on the inferred location information and the remaining (residual) energy level of each node, the proposed clustering algorithm will dynamically change cluster heads for energy efficiency. Simulation results show that the proposed adaptive clustering algorithm is efficient and effective for energy saving in wireless sensor networks.", "keywords": "adaptive clustering;wireless sensor networks;network management;data mining", "title": "Adaptive clustering in wireless sensor networks by mining sensor energy data"} {"abstract": "Polynomial Support Vector Machine models of degree d are linear functions in a feature space of monomials of at most degree d. However, the actual representation is stored in the form of support vectors and Lagrange multipliers, which is unsuitable for human understanding. An efficient, heuristic method for searching the feature space of a polynomial Support Vector Machine model for those features with the largest absolute weights is presented.
The time complexity of this method is Theta(dms^2 + sdp), where m is the number of variables, d the degree of the kernel, s the number of support vectors, and p the number of features the algorithm is allowed to search. In contrast, the brute force approach of constructing all weights and then selecting the largest weights has complexity Theta(sd C(m+d, d)), where C(m+d, d) denotes the binomial coefficient. The method is shown to be effective in identifying the top-weighted features on several simulated data sets, where the true weight vector is known. Additionally, the method is run on several high-dimensional, real world data sets where the features returned may be used to construct classifiers with classification performances similar to models built with all or subsets of variables returned by variable selection methods. This algorithm provides a new ability to understand, conceptualize, visualize, and communicate polynomial SVM models and has implications for feature construction, dimensionality reduction, and variable selection.", "keywords": "support vector machines;classification;variable selection", "title": "To feature space and back: Identifying top-weighted features in polynomial Support Vector Machine models"} {"abstract": "We propose a novel design of an artificial robot ear for sound direction estimation using two measured outputs only. The spectral features in the interaural transfer functions (ITFs) of the proposed artificial ears are distinctive and move monotonically according to the sound direction. Thus, these features provide effective sound cues for estimating the sound direction using the two measured output signals. Bilateral asymmetry of microphone positions can enhance the estimation performance even in the median plane where interaural differences vanish. We propose a localization method to estimate the lateral and vertical angles simultaneously. The lateral angle is estimated using the interaural time difference and Woodworth and Schlosberg's formula, and the front-back discrimination is achieved by finding the spectral features in the ITF estimated from two measured outputs. The vertical angle of a sound source in the frontal region is estimated by comparing the spectral features in the estimated ITF with those in a database built in an anechoic chamber. The feasibility of the designed artificial ear and the estimation method were verified in a real environment. In the experiment, it was shown that both the front-back discrimination and the sound direction estimation in the frontal region can be achieved with reasonable accuracy. Thus, we expect that robots with the proposed artificial ear can estimate the direction of a speaker from two output signals only.", "keywords": "sound direction estimation;artificial ear;human-robot interaction;head-related transfer function;interaural transfer function", "title": "Sound direction estimation using an artificial ear for robots"} {"abstract": "Many studies from a variety of countries have shown a U- or J-shaped relation between alcohol intake and mortality from all causes. It is now quite well documented from epidemiologic as well as clinical and experimental studies that the descending leg of the curve results from a decreased risk of cardiovascular disease among those with light-to-moderate alcohol consumption.
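Returning to the polynomial-SVM abstract above, a minimal sketch of the brute-force baseline its heuristic improves upon: expanding a trained polynomial kernel SVM into explicit monomial weights and ranking them by absolute value. For simplicity this uses a homogeneous kernel K(x, z) = (x . z)^2 on synthetic data, so the feature space holds only degree-2 monomials; the code and data are illustrative, not the paper's setup.

```python
# Recover explicit monomial weights from a degree-2 polynomial SVM.
import itertools
import math
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

d, m = 2, 4
X, y = make_classification(n_samples=200, n_features=m, n_informative=3,
                           n_redundant=0, random_state=0)
# gamma=1, coef0=0 makes the kernel exactly (x . z)^d.
clf = SVC(kernel="poly", degree=d, gamma=1.0, coef0=0.0).fit(X, y)

sv = clf.support_vectors_          # s x m support vectors
ay = clf.dual_coef_.ravel()        # alpha_i * y_i per support vector

# (x . z)^d expands into monomials: the weight of prod_i x_i^{k_i} is
# multinomial(d; k) * sum_j ay_j * prod_i sv_{j,i}^{k_i}.
weights = {}
for combo in itertools.combinations_with_replacement(range(m), d):
    k = np.bincount(combo, minlength=m)
    coef = math.factorial(d) / np.prod([math.factorial(int(ki)) for ki in k])
    weights[tuple(k)] = coef * np.sum(ay * np.prod(sv ** k, axis=1))

# Check: the reconstructed primal agrees with the kernelised decision.
x = X[0]
primal = sum(w * np.prod(x ** np.array(k)) for k, w in weights.items())
print(np.isclose(primal + clf.intercept_[0], clf.decision_function([x])[0]))

top = sorted(weights.items(), key=lambda kv: -abs(kv[1]))[:3]
print("top-weighted monomials:", top)
```

The dictionary grows like the binomial coefficient in the complexity bound above, which is why the heuristic search over only p features matters for large m and d.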
The findings that wine drinkers are at a decreased risk of mortality from cardiovascular disease compared to non-wine drinkers suggest that substances present in wine are responsible for a beneficial effect on the outcome, in addition to that from a light intake of ethanol. Several potential confounding factors still remain to be excluded, however.", "keywords": "coronary heart disease;alcohol intake;flavonoids;wine", "title": "Alcohol, Type of Alcohol, and All-Cause and Coronary Heart Disease Mortality"} {"abstract": "This paper describes our experience using coordinated atomic (CA) actions as a system structuring tool to design and validate a sophisticated and embedded control system for a complex industrial application that has high reliability and safety requirements. Our study is based on an extended production cell model, the specification and simulator for which were defined and developed by FZI (Forschungszentrum Informatik, Germany). This \"Fault-Tolerant Production Cell\" represents a manufacturing process involving redundant mechanical devices (provided in order to enable continued production in the presence of machine faults). The challenge posed by the model specification is to design a control system that maintains specified safety and liveness properties even in the presence of a large number and variety of device and sensor failures. Based on an analysis of such failures, we provide in this paper details of: 1) a design for a control program that uses CA actions to deal with both safety-related and fault tolerance concerns and 2) the formal verification of this design based on the use of model-checking. We found that CA action structuring facilitated both the design and verification tasks by enabling the various safety problems (involving possible clashes of moving machinery) to be treated independently. Even complex situations involving the concurrent occurrence of any pairs of the many possible mechanical and sensor failures can be handled simply yet appropriately. The formal verification activity was performed in parallel with the design activity and the interaction between them resulted in a combined exercise in \"design for validation\"; formal verification was very valuable in identifying some very subtle residual bugs in early versions of our design which would have been difficult to detect otherwise.", "keywords": "concurrency;coordinated atomic actions;embedded fault-tolerant systems;exception handling;object orientation;formal verification;model checking;reliability;safety", "title": "Rigorous development of an embedded fault-tolerant system based on coordinated atomic actions"} {"abstract": "In this paper, we present a novel technique of improving volume rendering quality and speed by integrating original volume data and global model information attained by segmentation. The segmentation information prevents object occlusions that may appear when volume rendering is based on local image features only. Thus the presented visualization technique provides meaningful visual results that enable a clear understanding of complex anatomical structures. In the first part, we describe a segmentation technique for extracting the region of interest based on an active contour model. In the second part, we propose a volume rendering method for visualizing the selected portions of fuzzy surfaces extracted by local image processing methods. We show the results of selective volume rendering of left and right ventricle based on cardiac datasets from clinical routines. 
Our method offers an accelerated technique to accurately visualize the surfaces of segmented objects.", "keywords": "cardiology;medical imaging;selective volume rendering;direct volume rendering;distance transform;segmentation", "title": "Ventricular shape visualization using selective volume rendering of cardiac datasets"} {"abstract": "Mutual Information (MI) is popular for registration via function optimization. This work proposes an inverse compositional formulation of MI for Levenberg-Marquardt optimization. This yields a constant Hessian, which may be precomputed. Speed improvements of 15 percent were obtained, with convergence accuracies similar to those of the standard formulation.", "keywords": "mutual information;registration;newton optimization;tracking", "title": "Mutual information for Lucas-Kanade tracking (MILK): An inverse compositional formulation"} {"abstract": "Conventional testing methods often fail to detect hidden flaws in complex embedded software such as device drivers or file systems. This deficiency incurs significant development and support/maintenance cost for the manufacturers. Model checking techniques have been proposed to compensate for the weaknesses of conventional testing methods through exhaustive analyses. Whereas conventional model checkers require manual effort to create an abstract target model, modern software model checkers remove this overhead by directly analyzing a target C program, and can be utilized as unit testing tools. However, since software model checkers are not fully mature yet, they have limitations according to the underlying technologies and tool implementations, potentially critical issues when applied in industrial projects. This paper reports our experience in applying Blast and CBMC to testing the components of a storage platform software for flash memory. Through this project, we analyzed the strong and weak points of two different software model checking technologies from the viewpoint of real-world industrial application: counterexample-guided abstraction refinement with predicate abstraction, and SAT-based bounded analysis.", "keywords": "embedded software verification;software model checking;bounded model checking;cegar-based model checking;flash file systems", "title": "A Comparative Study of Software Model Checkers as Unit Testing Tools: An Industrial Case Study"} {"abstract": "In this paper, we have described the preparation of a benchmark database for research on off-line Optical Character Recognition (OCR) of document images of handwritten Bangla text and Bangla text mixed with English words. This is the first handwritten database in this area, as mentioned above, available as an open source document. As India is a multi-lingual country and has a colonial past, multi-script document pages are very common. The database contains 150 handwritten document pages, among which 100 pages are written purely in Bangla script and the remaining 50 pages are written in Bangla text mixed with English words. This database of off-line handwritten scripts was collected from different data sources. After collecting the document pages, all the documents were preprocessed and distributed into two groups, i.e., CMATERdb1.1.1, containing document pages written in Bangla script only, and CMATERdb1.2.1, containing document pages written in Bangla text mixed with English words. Finally, we have also provided useful ground truth images for line segmentation.
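A small sketch of the similarity measure behind the MILK abstract above: mutual information between two images computed from a joint intensity histogram. This is only the MI objective that such registration methods optimise, not the inverse compositional Levenberg-Marquardt machinery itself; images here are synthetic.

```python
# Mutual information of two images via a joint histogram.
import numpy as np

def mutual_information(img1, img2, bins=32):
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0); zero cells contribute nothing
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, 3, axis=1) + 0.05 * rng.standard_normal((64, 64))
print(f"MI(img, img)     = {mutual_information(img, img):.3f}")
print(f"MI(img, shifted) = {mutual_information(img, shifted):.3f}")
# Aligned images share more information, so MI peaks at registration.
```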
To generate the ground truth images, we have first labeled each line in a document page automatically by applying one of our previously developed line extraction techniques [Khandelwal et al., PReMI 2009, pp. 369-374] and then corrected any remaining errors by using our developed tool GT Gen 1.1. Line extraction accuracies of 90.6% and 92.38% are achieved on the two databases, respectively, using our algorithm. Both the databases along with the ground truth annotations and the ground truth generating tool are available freely at http://code.google.com/p/cmaterdb.", "keywords": "unconstrained handwritten document image database;text line extraction;ground truth preparation;ocr of multi-script document", "title": "CMATERdb1: a database of unconstrained handwritten Bangla and Bangla-English mixed script document image"} {"abstract": "A time domain boundary element method (BEM) is presented to model the quasi-static linear viscoelastic behavior of asphalt pavements. In the viscoelastic analysis, the fundamental solution is derived in terms of elemental displacement discontinuities (DDs) and a boundary integral equation is formulated in the time domain. The unknown DDs are assumed to vary quadratically in the spatial domain and to vary linearly in the time domain. The equation is then solved incrementally through the whole time history using an explicit time-marching approach. All the spatial and temporal integrations can be performed analytically, which guarantees the accuracy of the method and the stability of the numerical procedure. Several viscoelastic models such as Boltzmann, Burgers, and power-law models are considered to characterize the time-dependent behavior of linear viscoelastic materials. The numerical method is applied to study the load-induced stress redistribution and its effects on the cracking performance of asphalt pavements. Some benchmark problems are solved to verify the accuracy and efficiency of the approach. Numerical experiments are also carried out to demonstrate application of the method in pavement engineering.", "keywords": "time domain bem;viscoelasticity;displacement discontinuities;time marching;viscoelastic models;asphalt pavements;stress redistribution", "title": "A time domain boundary element method for modeling the quasi-static viscoelastic behavior of asphalt pavements"} {"abstract": "This paper presents a decision support system devoted to the selection of films for the International Animated Film Festival organized at Annecy, France, every year. It deals with the representation and aggregation of referees' preferences along predefined criteria in addition to their overall selection point of view. The practical requirements associated with this application (often encountered in social or cultural areas as well) are: a common ordinal scale for the criteria scores, a procedure to deal with inconsistencies between criteria and overall scores, and explanation tools for each referee's preference model in order to facilitate the deliberation process and also to justify the selection decision. The processing of referees' preferences is achieved thanks to a recent method which consists in finding a generalized mean aggregation operator representing the preferences of a referee, in a finite ordinal scale context. The method makes it possible to impose consistency conditions on referees' behaviour in order to highlight the criteria or pairs of criteria that are most influential for each referee. 
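The generalized mean just mentioned can be illustrated with a small sketch; the 1-7 scale, the scores, and the exponent values below are hypothetical, and identifying a referee's preference model amounts to fitting the exponent (and any criterion weights) so that the aggregate reproduces the referee's overall scores:

```python
def power_mean(scores, p):
    """Generalized (power) mean M_p(x) = ((1/n) * sum(x_i ** p)) ** (1/p);
    p = 1 is the arithmetic mean, p -> 0 the geometric mean, and large
    positive/negative p behave like max/min respectively."""
    n = len(scores)
    if p == 0:
        prod = 1.0
        for x in scores:
            prod *= x
        return prod ** (1.0 / n)
    return (sum(x ** p for x in scores) / n) ** (1.0 / p)

# A referee's criterion scores on a hypothetical 1..7 ordinal scale.
criteria = [5, 3, 6, 4]
for p in (-2.0, 1.0, 3.0):
    print(p, round(power_mean(criteria, p), 2))
```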
All the functionalities have been implemented in an interactive decision-support software tool that facilitates a shared selection decision. Results from the 2007 selection are presented and analysed from the preference representation and processing point of view.", "keywords": "animated film selection;ordinal scale;expert preference representation;multi-criteria aggregation", "title": "A decision support system for animated film selection based on a multi-criteria aggregation of referees' ordinal preferences"} {"abstract": "Fuzzy Formal Concept Analysis (FFCA) is a generalization of Formal Concept Analysis (FCA) for modeling uncertainty information. FFCA provides a mathematical framework which can support the construction of formal ontologies in the presence of uncertainty data for the development of the Semantic Web. In this paper, we show how rough set theory can be employed in combination with FFCA to perform Semantic Web search and discovery of information in the Web.", "keywords": "semantic web;formal concept analysis;fuzzy information;rough set theory;formal concept", "title": "Semantic Web search based on rough sets and Fuzzy Formal Concept Analysis"} {"abstract": "The union graph is assumed to be strongly connected over each finite interval. An approach is proposed to transform the original network to a synchronous one. We show that the linear part converges and the projection error vanishes over time.", "keywords": "constrained consensus;multi-agent system;asynchronous communication;distributed control", "title": "Constrained consensus of asynchronous discrete-time multi-agent systems with time-varying topology"} {"abstract": "A Bernays-like axiomatic theory of intuitionistic fuzzy sets involving five primitives and seven axioms is presented.", "keywords": "fuzzy sets;intuitionistic fuzzy sets;axiomatic theory", "title": "Axiomatic theory of intuitionistic fuzzy sets"} {"abstract": "A methodology to increase the flat inspection area, useful for exposing the die side edge. A platinum (Pt) deposition technique to form a protection mask. Pt deposition to slow down the edging effect.", "keywords": "top-down polishing method;large exposition;edging effect;platinum deposition", "title": "Top-down delayering to expose large inspection area on die side-edge with Platinum (Pt) deposition technique"} {"abstract": "Whereas Operations Research concentrates on optimization, practitioners find the robustness of a proposed solution more important. Therefore this paper presents a practical methodology that is a stagewise combination of four proven techniques: (1) simulation, (2) optimization, (3) risk or uncertainty analysis, and (4) bootstrapping. This methodology is illustrated through a production-control study. That illustration defines robustness as the capability to maintain short-term service, in a variety of environments (scenarios); that is, the probability of the short-term fill rate remains within a prespecified range. Besides satisfying this probabilistic constraint, the system minimizes expected long-term work-in-process. Actually, the example compares four systems, namely Kanban, Conwip, Hybrid, and Generic, for the well-known case of a production line with four stations and a single product. 
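As a flavor of the bootstrapping stage, the sketch below estimates the probability that the short-term fill rate stays within a prespecified range, given fill rates from replicated simulation runs; the numbers and the acceptance range are made up:

```python
import random

def fill_rate_within_range(fill_rates, lo, hi, n_boot=10000, seed=1):
    """Bootstrap estimate of the probability that the mean short-term
    fill rate stays within [lo, hi]: resample the simulated fill rates
    with replacement and count how often the resampled mean is in range."""
    random.seed(seed)
    n, hits = len(fill_rates), 0
    for _ in range(n_boot):
        sample = [random.choice(fill_rates) for _ in range(n)]
        if lo <= sum(sample) / n <= hi:
            hits += 1
    return hits / n_boot

# Hypothetical fill rates from replicated short-term simulation runs.
runs = [0.93, 0.97, 0.95, 0.91, 0.96, 0.94, 0.98, 0.92]
print(fill_rate_within_range(runs, 0.92, 0.98))
```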
The conclusion is that in this particular example, Hybrid is best when risk is not ignored; otherwise Generic is best; that is, risk considerations do make a difference.", "keywords": "risk analysis;robustness and sensitivity analysis;scenarios;manufacturing;inventory", "title": "Short-term robustness of production management systems: A case study"} {"abstract": "The purpose of this paper was to investigate the structure of semi-Heyting chains and the variety \(\mathcal{CSH}\) generated by them. We determine the number of non-isomorphic n-element semi-Heyting chains. As a contribution to the study of the lattice of subvarieties of \(\mathcal{CSH}\), we investigate the inclusion relation between semi-Heyting chains. Finally, we provide equational bases for \(\mathcal{CSH}\) and for the subvarieties of \(\mathcal{CSH}\) introduced in [5].", "keywords": "heyting algebras;varieties;semi-heyting algebras", "title": "The variety generated by semi-Heyting chains"} {"abstract": "This paper presents an analytical approach for performing fault tree analysis (FTA) with stochastic self-loop events. The proposed approach uses the flow-graph concept, and moment generating function (MGF) to develop a new stochastic FTA model for computing the probability, mean time to occurrence, and standard deviation time to occurrence of the top event. The application of the method is demonstrated by solving one example.", "keywords": "fault tree;flow-graph;reliability model;stochastic failure analysis", "title": "Stochastic fault tree analysis with self-loop basic events"} {"abstract": "Since APL, reductions and scans have been recognized as powerful programming concepts. Abstracting an accumulation loop (reduction) and an update loop (scan), the concepts have efficient parallel implementations based on the parallel prefix algorithm. They are often included in high-level languages with a built-in set of operators such as sum, product, min, etc. MPI provides library routines for reductions that account for nearly nine percent of all MPI calls in the NAS Parallel Benchmarks (NPB) version 3.2. Some researchers have even advocated reductions and scans as the principal tool for parallel algorithm design. Also since APL, the idea of applying the reduction control structure to a user-defined operator has been proposed, and several implementations (some parallel) have been reported. This paper presents the first global-view formulation of user-defined scans and an improved global-view formulation of user-defined reductions, demonstrating them in the context of the Chapel programming language. Further, these formulations are extended to a message passing context (MPI), thus transferring global-view abstractions to local-view languages and perhaps signaling a way to enhance local-view languages incrementally. Finally, examples are presented showing global-view user-defined reductions \"cleaning up\" and/or \"speeding up\" portions of two NAS benchmarks, IS and MG. In consequence, these generalized reduction and scan abstractions make the full power of the parallel prefix technique available to both global- and local-view parallel programming.", "keywords": "reductions;mpi;parallel programming;scans;parallel prefix;chapel", "title": "global-view abstractions for user-defined reductions and scans"} {"abstract": "Boosting over weak classifiers is widely used in pedestrian detection. As the number of weak classifiers is large, researchers always use a sampling method over weak classifiers before training. 
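To make this sampling step concrete, the sketch below draws random decision stumps and keeps the one with the lowest weighted error under the current boosting weights; the features, labels, and stump form are toy assumptions rather than a HOG-based detector, and stump polarity handling is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def best_sampled_stump(X, y, w, n_sampled=50):
    """One boosting round under sampling: draw a subset of candidate stumps
    (feature, threshold pairs) at random and keep the one with the lowest
    weighted error, instead of scoring every possible weak classifier."""
    best = (None, None, np.inf)            # (feature, threshold, weighted error)
    for _ in range(n_sampled):
        f = rng.integers(X.shape[1])       # sample a feature index
        t = rng.choice(X[:, f])            # sample a threshold from its values
        pred = np.where(X[:, f] > t, 1, -1)
        err = w[pred != y].sum()
        if err < best[2]:
            best = (f, t, err)
    return best

X = rng.normal(size=(200, 16))             # toy feature vectors (HOG-like in spirit)
y = np.where(X[:, 3] > 0.2, 1, -1)         # hypothetical labels
w = np.full(len(y), 1.0 / len(y))          # current boosting weights
print(best_sampled_stump(X, y, w))
```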
The sampling makes it harder for the boosting process to reach a fixed performance target. In this paper, we propose a partial-derivative-guided weak classifier mining method which can be used in conjunction with a boosting algorithm. The weak classifier mining method reduces the performance degradation caused by sampling. It has the same effect as testing more weak classifiers, while using an acceptable amount of time. Experiments demonstrate that our algorithm is faster than the algorithm of [1] in both training and testing, without any performance decrease. The proposed algorithm easily extends to any other boosting algorithm that uses a window-scanning style and HOG-like features.", "keywords": "pedestrian detection;partial derivative;classifier mining;hog;boosting", "title": "Partial Derivative Guidance for Weak Classifier Mining in Pedestrian Detection"} {"abstract": "Operators, which apply to functions to produce functions, are an important component of APL. Despite their importance, their role is not well understood, and they are often lumped with functions in expositions of the language. This paper attempts to clarify the role of operators in APL by tracing their development, outlining possible future directions, and commenting briefly on their roles in other languages, both natural and programming.", "keywords": "role;program;direct;operability;traces;paper;future;developer;language;roles;component", "title": "the role of operators in apl"} {"abstract": "This paper presents a mathematical model of biological structures in relation to coronary arteries with atherosclerosis. A set of equations has been derived to compute blood flow through these transport vessels with variable axial and radial geometries. Three-dimensional reconstructions of diseased arteries from cadavers have shown that atherosclerotic lesions spiral through the artery. The theoretical framework is able to explain the phenomenon of lesion distribution in a helical pattern by examining the structural parameters that affect the flow resistance and wall shear stress. The study is useful for establishing the relationship between arterial wall geometry and the hemodynamics of blood. It provides a simple, elegant and non-invasive method to predict flow properties for geometrically complex pathology at micro-scale levels and with low computational cost.", "keywords": "atherosclerosis;axial and radial asymmetry;spiraling lesion;resistance to flow ratio;wall shear stress", "title": "Theoretical modeling of micro-scale biological phenomena in human coronary arteries"} {"abstract": "Dynamic, unanticipated adaptation of running systems is of interest in a variety of situations, ranging from functional upgrades to on-the-fly debugging or monitoring of critical applications. In this paper we study a particular form of computational reflection, called unanticipated partial behavioral reflection (UPBR), which is particularly well suited for unanticipated adaptation of real-world systems. Our proposal combines the dynamicity of unanticipated reflection, i.e., reflection that does not require preparation of the code of any sort, and the selectivity and efficiency of partial behavioral reflection (PBR). First, we propose unanticipated partial behavioral reflection which enables the developer to precisely select the required reifications, to flexibly engineer the metalevel and to introduce the metabehavior dynamically. 
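The paper's host language is Squeak Smalltalk; as a rough Python analogue of such selective, unanticipated interception, the sketch below wraps only chosen methods at runtime so that each invocation is reified and handed to a metalevel handler (the class and the handler are invented for illustration):

```python
import functools

def reify_invocations(cls, selected, metabehavior):
    """Wrap only the selected methods of a class at runtime so that each
    invocation is reified as (receiver, name, args) and handed to a
    metalevel handler before the base behavior runs; the original code
    needs no preparation of any sort."""
    for name in selected:
        original = getattr(cls, name)

        @functools.wraps(original)
        def wrapper(self, *args, _orig=original, _name=name, **kwargs):
            metabehavior(self, _name, args)      # the metalevel sees the reified call
            return _orig(self, *args, **kwargs)  # base behavior is preserved

        setattr(cls, name, wrapper)

class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, amount):
        self.balance += amount

# Introduce the metabehavior dynamically, and only for `deposit`.
reify_invocations(Account, ["deposit"],
                  lambda obj, name, args: print("reified:", name, args))
acc = Account()
acc.deposit(10)      # prints: reified: deposit (10,)
print(acc.balance)   # 10
```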
Second, we present a system supporting unanticipated partial behavioral reflection in Squeak Smalltalk, called GEPPETTO, and illustrate its use with a concrete example of a web application. Benchmarks validate the applicability of our proposal as an extension to the standard reflective abilities of Smalltalk.", "keywords": "reflection;metaprogramming;metaobject protocol;smalltalk", "title": "Unanticipated partial behavioral reflection: Adapting applications at runtime"} {"abstract": "In supply chain management, to build strategic and strong relationships, firms should select the best suppliers by applying an appropriate method and selection criteria. In this paper, to handle ambiguity and fuzziness in the supplier selection problem effectively, a new weighted additive fuzzy programming approach is developed. First, linguistic values expressed as trapezoidal fuzzy numbers are used to assess the weights of the factors; the weights are obtained from the distances of each factor to the Fuzzy Positive Ideal Rating and the Fuzzy Negative Ideal Rating. Then, applying the suppliers' constraints, goals and factor weights, a fuzzy multi-objective linear model is developed to solve the selection problem and assign optimal order quantities to each supplier. The proposed model is explained by a numerical example.", "keywords": "supplier selection;fuzzy multi-objective linear model;linguistic variables;multi-criteria decision making", "title": "A weighted additive fuzzy programming approach for multi-criteria supplier selection"} {"abstract": "The FireGrid project aims to harness the potential of advanced forms of computation to support the response to large-scale emergencies (with an initial focus on the response to fires in the built environment). Computational models of physical phenomena are developed, and then deployed and computed on High Performance Computing resources to infer incident conditions by assimilating live sensor data from an emergency in real time or, in the case of predictive models, faster than real time. The results of these models are then interpreted by a knowledge-based reasoning scheme to provide decision support information in appropriate terms for the emergency responder. These models are accessed over a Grid from an agent-based system, of which the human responders form an integral part. This paper proposes a novel FireGrid architecture, and describes the rationale behind this architecture and the research results of its application to a large-scale fire experiment.", "keywords": "emergency response;grid;high performance computing;multi-agent system;knowledge-based reasoning;fire simulation model", "title": "FireGrid: An e-infrastructure for next-generation emergency response support"} {"abstract": "In this paper, we consider a single batch machine scheduling problem with incompatible job families and dynamic job arrivals. The objective is to minimize the total completion time. This problem is known to be strongly NP-hard. We present several dominance properties and two types of lower bounds, which are incorporated to construct a basic branch and bound algorithm. Furthermore, according to the characteristics of dynamic job arrivals, a decomposed branch and bound algorithm is proposed to improve the efficiency. 
The proposed algorithms are tested on a large set of randomly generated problem instances.", "keywords": "branch and bound algorithm;dynamic arrivals;batch scheduling;incompatible job families", "title": "A branch and bound algorithm for minimizing total completion time on a single batch machine with incompatible job families and dynamic arrivals"} {"abstract": "Accurate predictions of chemical composition from physical properties of sour vacuum gas oil (VGO) fractions are important for the refinery. In this paper, a feed-forward network based on a genetic algorithm (GA) was developed and used for predicting saturates of sour vacuum gas oil. The number of neurons in the hidden layer, the momentum and the learning rates were determined by using the genetic algorithm. The five physical properties of sour VGO, namely, average boiling point, density at 20°C, molecular weight, kinematic viscosity at 100°C and refractive index at 70°C, were considered as input variables of the ANN, and the saturates content of sour VGO was used as the output variable. The study shows that the genetic algorithm could find the optimal network architecture and parameters of the back-propagation algorithm. Further, the artificial neural network models based on the genetic algorithm are tested, and the results indicate that the adopted model is very suitable for the forecasting of saturates of sour VGO. Compared with other forecasting models, this model improves prediction accuracy.", "keywords": "saturates;sour;vacuum gas oil;prediction;artificial neural networks;genetic algorithm", "title": "Predicting saturates of sour vacuum gas oil using artificial neural networks and genetic algorithms"} {"abstract": "Wireless Sensor Networks (WSNs) represent a key technology for collecting important information from different sources in context-aware environments. Unfortunately, integrating devices from different architectures or wireless technologies into a single sensor network is not an easy task for designers and developers. In this sense, distributed architectures, such as service-oriented architectures and multi-agent systems, can facilitate the integration of heterogeneous sensor networks. In addition, the sensors' capabilities can be expanded by means of intelligent agents that change their behavior dynamically. This paper presents the Hardware-Embedded Reactive Agents (HERA) platform. HERA is based on Services laYers over Light PHysical devices (SYLPH), a distributed platform which integrates a service-oriented approach into heterogeneous WSNs. Like SYLPH, HERA can be executed on multiple devices independently of their wireless technology, their architecture or the programming language they use. However, HERA goes one step beyond SYLPH and adds reactive agents to the platform, as well as a reasoning mechanism that provides HERA Agents with Case-Based Planning features that allow solving problems by drawing on past experience. Unlike other approaches, HERA allows developing applications where reactive agents are directly embedded into heterogeneous wireless sensor nodes with reduced computational resources.", "keywords": "distributed architectures;multi-agent systems;heterogeneous wireless sensor networks;embedded agents;case-based planning", "title": "Implementing a hardware-embedded reactive agents platform based on a service-oriented architecture over heterogeneous wireless sensor networks"} {"abstract": "As technology advances, streams of data can be rapidly generated in many real-life applications. 
This calls for stream mining, which searches for implicit, previously unknown, and potentially useful information, such as frequent patterns, that might be embedded in continuous data streams. However, most of the existing algorithms do not allow users to express the patterns to be mined according to their intentions, via the use of constraints. As a result, these unconstrained mining algorithms can yield numerous patterns that are not interesting to the users. Moreover, many existing tree-based algorithms assume that all the trees constructed during the mining process can fit into memory. While this assumption holds for many situations, there are many other situations in which it does not hold. Hence, in this paper, we develop efficient algorithms for stream mining of constrained frequent patterns in a limited memory environment. Our algorithms allow users to impose a certain focus on the mining process, discover from data streams all those frequent patterns that satisfy the user constraints, and handle situations where the available memory space is limited.", "keywords": "limited memory space;frequent itemsets;data streams;constraints;data mining", "title": "efficient algorithms for stream mining of constrained frequent patterns in a limited memory environment"} {"abstract": "The design (synthesis) of an analog electrical circuit entails the creation of both the topology and sizing (numerical values) of all of the circuit's components. There has previously been no general automated technique for automatically creating the design for an analog electrical circuit from a high-level statement of the circuit's desired behavior. This paper shows how genetic programming can be used to automate the design of eight prototypical analog circuits, including a lowpass filter, a highpass filter, a bandstop filter, a tri-state frequency discriminator circuit, a frequency-measuring circuit, a 60 dB amplifier, a computational circuit for the square root function, and a time-optimal robot controller circuit.", "keywords": "genetic programming;genetic algorithms;circuit synthesis;electrical circuits;design", "title": "Synthesis of topology and sizing of analog electrical circuits by means of genetic programming"} {"abstract": "In many decision making problems, a number of independent attributes or criteria are often used to individually rate an alternative from an agent's local perspective and then these individual ratings are combined to produce an overall assessment. Now, in cases where these individual ratings are not in complete agreement, the overall rating should be somewhere in between the extremes that have been suggested. However, there are many possibilities for the aggregated value. Given this, this paper systematically explores the space of possible compromise operators for such multi-attribute decision making problems. Specifically, we axiomatically identify the complete spectrum of such operators in terms of the properties they should satisfy, and show that the main ones that are widely used, namely averaging operators, uninorms and nullnorms, represent only three of the nine types we identify. For each type, we then go on to analyse their properties and discuss how specific instances can actually be developed. Finally, to illustrate the richness of our framework, we show how a wide range of operators are needed to model the various attitudes that a user may have for aggregation in a given scenario (bidding in multi-attribute auctions). 
", "keywords": "aggregation operator;uninorm;nullnorm risk;multi-attribute decision making;multi-attribute auction", "title": "A spectrum of compromise aggregation operators for multi-attribute decision making"} {"abstract": "We present a method for assessing categorical perception from continuous discrimination data. Until recently, categorical perception of speech has exclusively been measured by discrimination and identification experiments with a small number of different stimuli, each of which is presented multiple times. Experiments by Rogers and Davis (2009), however, suggest that using non-repeating stimuli yields a more reliable measure of categorization. If this idea is applied to a single phonetic continuum, the continuum has to be densely sampled and the obtained discrimination data is nearly continuous. In the present study, we describe a maximum-likelihood method that is appropriate for analysing such continuous discrimination data.", "keywords": "categorical perception;dense sampling;discrimination;maximum likelihood", "title": "Detecting categorical perception in continuous discrimination data"} {"abstract": "We propose a method for document ranking that combines a simple document-centric view of text, and fast evaluation strategies that have been developed in connection with the vector space model. The new method defines the importance of a term within a document qualitatively rather than quantitatively, and in doing so reduces the need for tuning parameters. In addition, the method supports very fast query processing, with most of the computation carried out on small integers, and dynamic pruning an effective option. Experiments on a wide range of TREC data show that the new method provides retrieval effectiveness as good as or better than the Okapi BM25 formulation, and variants of language models.", "keywords": "vector space model;method;strategies;dynamic;computation;retrieval;text;language model;data;efficiency/scale: architectures;efficient query evaluation;experience;similarity;tuning;documentation;connection;effect;evaluation;prune;query processing;text representation and indexing;compression;ranking", "title": "simplified similarity scoring using term ranks"} {"abstract": "Smart Objects and Internet of Things are two ideas that describe the future. The interconnection of objects can make them intelligent or expand their intelligence. This is achieved by a network that connects all the objects in the world. A network where most of the data traffic comes from objects instead of people. Cities, houses, cars or any other objects that come to life, respond, work and make their owners life easier. This is part of that future. But first, there are many basic problems that must be solved. In this paper we propose solutions for many of these problems: the interconnection of ubiquitous, heterogeneous objects and the generation of applications allow inexperienced people to interconnect them. For that purpose, we present three possible solutions: a Domain Specific Language capable of abstracting the application generation problem; a graphic editor that simplifies the creation of that DSL; and an IoT platform (Midgar) able to interconnect different objects between them. Through Midgar, you can register objects and create interconnection between ubiquitous and heterogeneous objects through a graphic editor that generates a model defined by the DSL. 
From this model, Midgar generates the interconnection defined by the user with the graphical editor.", "keywords": "internet of things;ubiquitous computing;sensor network;model driven engineering;domain specific language;smart objects", "title": "Midgar: Generation of heterogeneous objects interconnecting applications. A Domain Specific Language proposal for Internet of Things scenarios"} {"abstract": "ROC curves and cost curves are two popular ways of visualising classifier performance, finding appropriate thresholds according to the operating condition, and deriving useful aggregated measures such as the area under the ROC curve (AUC) or the area under the optimal cost curve. In this paper we present new findings and connections between ROC space and cost space. In particular, we show that ROC curves can be transferred to cost space by means of a very natural threshold choice method, which sets the decision threshold such that the proportion of positive predictions equals the operating condition. We call these new curves rate-driven curves, and we demonstrate that the expected loss as measured by the area under these curves is linearly related to AUC. We show that the rate-driven curves are the genuine equivalent of ROC curves in cost space, establishing a point-point rather than a point-line correspondence. Furthermore, a decomposition of the rate-driven curves is introduced which separates the loss due to the threshold choice method from the ranking loss (Kendall tau distance). We also derive the curve corresponding to the ROC convex hull in cost space; this curve is different from the lower envelope of the cost lines, as the latter assumes only optimal thresholds are chosen.", "keywords": "cost curves;roc curves;cost-sensitive evaluation;ranking performance;operating condition;kendall tau distance;area under the roc curve", "title": "ROC curves in cost space"} {"abstract": "In this paper, a self-developing neural network model, namely the Growing Cell Structures (GCS), is characterized. In GCS each node (or cell) is associated with a local resource counter \(\tau(t)\). We show that GCS has the conservation property by which the summation of all resource counters always equals \(s(1-\alpha)/\alpha\), where \(s\) is the increment added to \(\tau(t)\) of the winning node after each input presentation and \(\alpha\) (\(0 < \alpha < 1\)) is the forgetting (i.e., decay) factor applied to \(\tau(t)\) of non-winning nodes. The conservation property provides an insight into how GCS can maximize information entropy. The property is also employed to unveil the chain-reaction effect and race condition which can greatly influence the performance of GCS. We show that GCS can perform better in terms of the equi-probable criterion if the resource counters are decayed with a smaller \(\alpha\).", "keywords": "self-developing neural network;competitive learning;race-condition;topology;equi-probable criterion;chain-reaction effect", "title": "On the characteristics of growing cell structures (GCS) neural network"} {"abstract": "In recent years Evolutionary Computation has come of age, with Genetic Algorithms (GA) being possibly the most popular technique. A study is presented revealing the performance of a GA in determining the PID tuning parameters for a multivariable process, including decoupling controllers. The process used for this investigation is a distillation column which is a MIMO high-order, nonlinear system. 
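As a toy illustration of GA-based PID tuning, the sketch below evolves gains against an integral-squared-error fitness for a first-order plant; the plant, GA operators, and parameters are simplified stand-ins for the MIMO distillation column with decoupling controllers studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def ise_cost(gains, tau=0.5, dt=0.01, T=5.0):
    """Integral of squared error for a PID loop around the first-order
    plant dy/dt = (-y + u)/tau, tracking a unit step reference."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += dt * (-y + u) / tau
        if abs(y) > 1e6:                       # diverged: heavy penalty
            return 1e9
        cost += err * err * dt
        prev_err = err
    return cost

pop = rng.uniform(0.0, 5.0, size=(40, 3))      # random (kp, ki, kd) candidates
for _ in range(30):
    costs = np.array([ise_cost(ind) for ind in pop])
    elite = pop[np.argsort(costs)[:10]]        # truncation selection
    rows = rng.integers(10, size=(30, 3))      # gene-wise uniform crossover
    children = elite[rows, np.arange(3)] + rng.normal(0.0, 0.1, (30, 3))
    pop = np.vstack([elite, np.clip(children, 0.0, None)])
print("best gains:", pop[np.argmin([ise_cost(ind) for ind in pop])])
```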
The results indicate some limitations of using GAs for controller tuning when MIMO systems are involved.", "keywords": "genetic algorithm;tuning;pid;multivariable", "title": "limitations of multivariable controller tuning using genetic algorithms"} {"abstract": "Real-time traffic information collection and data fusion are among the most important tasks in the advanced traffic management system (ATMS), and sharing traffic information with users is an essential part of the advanced traveler information system (ATIS) within intelligent transportation systems (ITS). Traditionally, sensor-based schemes or probing-vehicle-based schemes have been used for collecting traffic information, but the coverage, cost, and real-time issues have remained unsolved. In this paper, a wiki-like collaborative real-time traffic information collection, fusion and sharing framework is proposed, which includes a user-centric traffic event reaction mechanism and an automatic agent-centric traffic information aggregation scheme. Smart traffic agents (STA) developed for various front-end devices have location-aware two-way real-time traffic exchange capability and a built-in event-reporting mechanism that allows users to report the real-time traffic events around their locations. In addition to collecting traffic information, the framework also integrates heterogeneous external real-time traffic information data sources and an internal historical traffic information database to predict real-time traffic status by knowledge-based system techniques.", "keywords": "collective intelligence;traffic status prediction;smart traffic agent;intelligent transportation system;knowledge-based system", "title": "Collaborative real-time traffic information generation and sharing framework for the intelligent transportation system"} {"abstract": "Compared older adults' and healthcare providers' perceptions of self-monitoring. Explored advantages in older adults' voluntary use of self-monitoring. Identified challenges in older adults' voluntary use of self-monitoring. Suggested design implications for older adults' self-monitoring tools.", "keywords": "consumer health information;health communication;self management;independent living", "title": "Perspectives on wellness self-monitoring tools for older adults"} {"abstract": "Sentence retrieval aims to return query-relevant sentences in response to a user query. However, the limited information contained in a sentence incurs considerable uncertainty, which heavily influences retrieval performance. To solve this problem, Bayesian networks, which have been accepted as one of the most promising methodologies for dealing with information uncertainty, are explored. Correspondingly, three sentence retrieval models based on Bayesian networks are proposed, i.e. the BNSR model, the BNSR_TR model and the BNSR_CR model. The BNSR model assumes independence between terms and shows a certain improvement in retrieval performance. The BNSR_TR and BNSR_CR models relax the term-independence assumption and consider term relationships from two different points of view, namely the term and the term context. 
Experiments verify the performance improvements produced by these two models; the BNSR_CR model shows more advantages than the BNSR_TR model because of its more accurate identification of term dependencies.", "keywords": "sentence retrieval;bayesian network;term relationship;association rule mining", "title": "Exploration of term relationship for Bayesian network based sentence retrieval"} {"abstract": "Methods based on aerodynamics are developed to simulate and control the motion of objects in fluid flows. To simplify the physics for animation, the problem is broken down into two parts: a fluid flow regime and an object boundary regime. With this simplification one can approximate the realistic behaviour of objects moving in liquids or air. It also enables a simple way of designing and controlling animation sequences: from a set of flow primitives, an animator can design the spatial arrangement of flows, create flows around obstacles and direct flow timing. The approach is fast, simple, and is easily fitted into simulators that model objects governed by classical mechanics. The methods are applied to an animation that involves hundreds of flexible leaves being blown by wind currents.", "keywords": "spatial;simplification;aerodynamics;method;simulation;approximation;arrangement;design;fluid mechanics;leaves;object;direct;flow primitives;flow;flexibility;control motion design;physical;timing;model;animation;sequence;control;motion", "title": "animation aerodynamics"} {"abstract": "We present a Fourier-analytic approach to list-decoding Reed-Muller codes over arbitrary finite fields. We use this to show that quadratic forms over any field are locally list-decodable up to their minimum distance. The analogous statement for linear polynomials was proved in the celebrated works of Goldreich et al. Previously, tight bounds for quadratic polynomials were known only for q = 2 and 3; the best bound known for other fields was the Johnson radius. Departing from previous work on Reed-Muller decoding which relies on some form of self-corrector, our work applies ideas from Fourier analysis of Boolean functions to low-degree polynomials over finite fields, in conjunction with results about the weight distribution. We believe that the techniques used here could find other applications; we present some applications to testing and learning.", "keywords": "codes;computational complexity;fourier transforms;polynomials", "title": "A Fourier-Analytic Approach to Reed-Muller Decoding"} {"abstract": "Assume a coevolutionary algorithm capable of storing and utilizing all phenotypes discovered during its operation, for as long as it operates on a problem; that is, assume an algorithm with a monotonically increasing knowledge of the search space. We ask: If such an algorithm were to periodically report, over the course of its operation, the best solution found so far, would the quality of the solution reported by the algorithm improve monotonically over time? To answer this question, we construct a simple preference relation to reason about the goodness of different individual and composite phenotypic behaviors. We then show that whether the solutions reported by the coevolutionary algorithm improve monotonically with respect to this preference relation depends upon the solution concept implemented by the algorithm. 
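One of the solution concepts examined next is Nash equilibrium; as a tiny concrete reference point, the sketch below finds the pure-strategy Nash equilibria of a bimatrix game by marking the cells where both players are simultaneously best-responding (the payoff matrices form a made-up coordination game):

```python
import numpy as np

def pure_nash(A, B):
    """Return all pure-strategy Nash equilibria of a bimatrix game:
    cells where the row player's payoff A and the column player's
    payoff B are simultaneously best responses."""
    row_best = A == A.max(axis=0, keepdims=True)   # best row for each column
    col_best = B == B.max(axis=1, keepdims=True)   # best column for each row
    return list(zip(*np.nonzero(row_best & col_best)))

# A toy coordination game: both (0, 0) and (1, 1) are Nash equilibria.
A = np.array([[2, 0], [0, 1]])
B = np.array([[2, 0], [0, 1]])
print(pure_nash(A, B))   # [(0, 0), (1, 1)]
```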
We show that the solution concept implemented by the conventional coevolutionary algorithm does not guarantee monotonic improvement; in contrast, the game-theoretic solution concept of Nash equilibrium does guarantee monotonic improvement. Thus, this paper considers 1) whether global and objective metrics of goodness can be applied to coevolutionary problem domains (possibly with open-ended search spaces), and 2) whether coevolutionary algorithms can, in principle, optimize with respect to such metrics and find solutions to games of strategy.", "keywords": "coevolution;solution concepts;monotonic progress", "title": "monotonic solution concepts in coevolution"} {"abstract": "Becoming trapped in suboptimal local minima is a perennial problem when optimizing visual models, particularly in applications like monocular human body tracking where complicated parametric models are repeatedly fitted to ambiguous image measurements. We show that trapping can be significantly reduced by building 'roadmaps' of nearby minima linked by transition pathways (paths leading over low 'mountain passes' in the cost surface), found by locating the transition state (codimension-1 saddle point) at the top of the pass and then sliding downhill to the next minimum. We present two families of transition-state-finding algorithms based on local optimization. In eigenvector tracking, unconstrained Newton minimization is modified to climb uphill towards a transition state, while in hypersurface sweeping, a moving hypersurface is swept through the space and moving local minima within it are tracked using a constrained Newton method. These widely applicable numerical methods, which appear not to be known in vision and optimization, generalize methods from computational chemistry where finding transition states is critical for predicting reaction parameters. Experiments on the challenging problem of estimating 3D human pose from monocular images show that our algorithms find nearby transition states and minima very efficiently, but also underline the disturbingly large numbers of minima that can exist in this and similar model-based vision problems.", "keywords": "model based vision;global optimization;saddle points;3d human tracking", "title": "Building roadmaps of minima and transitions in visual models"} {"abstract": "This paper surveys our parallel algorithms for determining geometric properties of systems of moving objects. The properties investigated include nearest (farthest) neighbor, closest (farthest) pair, collision, convex hull, diameter, and containment. The models of computation include the CREW PRAM, mesh, and hypercube.", "keywords": "collision;mesh;systems;survey;parallel algorithm;convex hull;dynamic;parallel computation;hypercube;paper;model of computation;computational geometry;container", "title": "dynamic computational geometry on parallel computers"} {"abstract": "Single neuron activities from cortical areas of a monkey were recorded while the monkey performed a sensory-motor task (a choice reaction time task). Quantitative trial-by-trial analysis revealed that the timing of peak activity exhibited large variation from trial to trial, compared to the variation in the behavioral reaction time of the task. Therefore, we developed a multi-unit dynamic neural network model to investigate the effects of the structure of neural connections on the variation of the timing of peak activity. 
Computer simulation of the model showed that, even though the units are connected in a cascade fashion, a wide variation exists in the timing of peak activity of neurons because of the parallel organization of the neural network within each unit.", "keywords": "single neuron;peak activity;neural model;simulation", "title": "Why does the single neuron activity change from trial to trial during sensory-motor task"} {"abstract": "We model the uncertainty in distributed design. We model the attitudes of design agents to develop novel collaboration indicators. Monte Carlo simulation is performed for heterogeneous design agents. Design conflicts of heterogeneous design agents are prevented. Domination by individual design agents is reduced so as to be coherent with the agents' characters.", "keywords": "collaborative design;distributed design;set-based design;conflict prevention;constraint satisfaction problem;agent attitude model", "title": "Preventing design conflicts in distributed design systems composed of heterogeneous agents"} {"abstract": "In this paper, we investigate the survivability of mobile wireless communication networks in the event of base station (BS) failure. A survivable network is modeled as a mathematical optimization problem in which the objective is to minimize the total amount of blocked traffic. We apply Lagrangean relaxation as a solution approach and analyze the experiment results in terms of the blocking rate, service rate, and CPU time. The results show that the total call blocking rate (CBR) is much less sensitive to the call blocking probability (CBP) threshold of each BS when the load is light rather than heavy; therefore, the heavier the traffic load, the less the service rate varies. BS recovery is much more important when the network load is light. However, the BS recovery ratio (BSRR), which is a key factor in reducing the blocking rate for a small number of BSs, is more important when a system is heavily loaded. The proposed model provides network survivability subject to available resources. The model also fits capacity expansion requirements by locating mobile/portable BSs in the places they are most needed.", "keywords": "base station recovery;lagrangean relaxation;mathematical modeling;network survivability;performance evaluation;quality of service", "title": "Survivability and performance optimization of mobile wireless communication networks in the event of base station failure"} {"abstract": "A flow model is presented for predicting a hydraulic jump in a straight open channel. The model is based on the general 2D shallow water equations in strong conservation form, without artificial viscosity, which is usually incorporated into the flow equations to capture a hydraulic jump. The equations are discretised using the finite volume method. The results are compared with experimental data and available numerical results, and show that the present model provides good results. The model is simple and easy to implement. To demonstrate the potential application of the model, several hydraulic jumps occurring in different situations are simulated, and the predictions are in good agreement with standard solutions for open channel hydraulics.", "keywords": "shallow water flow;finite volume method;hydraulic jump;open channel flow", "title": "2D shallow water flow model for the hydraulic jump"} {"abstract": "Before a patent application is made, it is important to search the appropriate databases for prior art (i.e., pre-existing patents that may affect the validity of the application). 
Previous work on prior-art search has concentrated on single query representations of the patent application. In this paper, we describe an approach which uses multiple query representations. We evaluate our technique using a well-known test collection (CLEF-IP 2011). Our results suggest that multiple query representations significantly outperform single query representations.", "keywords": "patent search;prior-art;collaborative filtering", "title": "Using multiple query representations in patent prior-art search"} {"abstract": "In this paper, we propose a model for the Flexible Job Shop Scheduling Problem (FJSSP) with transportation constraints and bounded processing times. This is an NP-hard problem. The objectives are to minimize the makespan and the storage of solutions. A genetic algorithm with a tabu search procedure is proposed to solve both the assignment of resources and the sequencing problems on each resource. In order to evaluate the proposed algorithm's efficiency, five types of instances are tested. Three of them consider sequencing problems with or without assignment of processing and/or transport resources. The fourth and fifth ones introduce bounded processing times, which mainly characterize Surface Treatment Facilities (STFs). Computational results show that our model and method are efficient for solving both assignment and scheduling problems in various kinds of systems.", "keywords": "flexible job shop scheduling problem with transportation;bounded processing times;genetic algorithm;tabu search;flexible manufacturing system;robotic cell;surface treatment facility;disjunctive graph", "title": "A genetic algorithm with tabu search procedure for flexible job shop scheduling with transportation constraints and bounded processing times"} {"abstract": "The purpose of the work described in this paper is to provide an intelligent intrusion detection system (IIDS) that uses two of the most popular data mining tasks, namely classification and association rule mining, together for predicting different behaviors in networked computers. To achieve this, we propose a method based on iterative rule learning using a fuzzy rule-based genetic classifier. Our approach is mainly composed of two phases. First, a large number of candidate rules are generated for each class using fuzzy association rule mining, and they are pre-screened using two rule evaluation criteria in order to reduce the fuzzy rule search space. The candidate rules obtained after pre-screening are used in the genetic fuzzy classifier to generate rules for the classes specified in the IIDS, namely Normal, PRB (probe), DOS (denial of service), U2R (user to root) and R2L (remote to local). During the next stage, a boosting genetic algorithm is employed for each class to find the fuzzy rules required to classify its data; each time a fuzzy rule is extracted, it is included in the system. The boosting mechanism evaluates the weight of each data item to help the rule extraction mechanism focus more on data having relatively more weight, i.e., data covered less by the rules extracted up to the current iteration. Each extracted fuzzy rule is assigned a weight. 
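A minimal sketch of how such weighted fuzzy rules can be aggregated into class votes, anticipating the aggregation described in the next sentence; the membership functions, weights, and class labels below are invented for illustration:

```python
def classify(rules, x):
    """Aggregate weighted fuzzy rule votes: each rule contributes
    weight * firing_strength to its class label, and the label with
    the largest total vote wins. `rules` holds (match_fn, weight, label)."""
    votes = {}
    for match, weight, label in rules:
        votes[label] = votes.get(label, 0.0) + weight * match(x)
    return max(votes, key=votes.get)

def tri(a, b, c):
    """Triangular membership function on one normalized feature."""
    return lambda x: max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Two hypothetical rules over a single normalized feature.
rules = [(tri(0.0, 0.2, 0.5), 0.9, "Normal"),
         (tri(0.3, 0.7, 1.0), 0.7, "DOS")]
print(classify(rules, 0.6))   # DOS
```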
Weighted fuzzy rules in each class are aggregated to find the vote of each class label for each data item.", "keywords": "intrusion detection;genetic classifier;fuzziness;data mining;weighted fuzzy rules", "title": "Intrusion detection by integrating boosting genetic fuzzy classifier and data mining criteria for rule pre-screening"} {"abstract": "In this paper, we propose an Adaptation Decision-Taking Engine (ADTE) that targets the delivery of scalable video content in mobile usage environments. Our ADTE design relies on an objective perceptual quality metric in order to achieve video adaptation according to human visual perception, thus making it possible to maximize the Quality of Service (QoS). To describe the characteristics of a particular usage environment, as well as the properties of the scalable video content, MPEG-21 Digital Item Adaptation (DIA) is used. Our experimental results show that the proposed ADTE design provides video content with a higher subjective quality than an ADTE using the conventional maximum-bit-allocation method.", "keywords": "adaptation;adte;quality metric;subjective quality;svc", "title": "An Objective Perceptual Quality-Based ADTE for Adapting Mobile SVC Video Content"} {"abstract": "With the rapid advancement of Internet technology and usage, some emerging applications in data communications and network security require matching of huge volumes of data against large signature sets with thousands of strings in real time. In this article, we present a memory-efficient hardware implementation of the well-known Aho-Corasick (AC) string-matching algorithm using a pipelining approach called P-AC. An attractive feature of the AC algorithm is that it can solve the string-matching problem in time linearly proportional to the length of the input stream, and the computation time is independent of the number of strings in the signature set. A major disadvantage of the AC algorithm is the high memory cost required to store the transition rules of the underlying deterministic finite automaton. By incorporating pipelined processing, the state graph is reduced to a character trie that only contains forward edges. Together with an intelligent implementation of look-up tables, the memory cost of P-AC is only about 18 bits per character for a signature set containing 6,166 strings extracted from Snort. The control structure of P-AC is simple and elegant. The cost of the control logic is very low. With the availability of dual-port memories in FPGA devices, we can double the system throughput by duplicating the control logic such that the system can process two data streams concurrently. Since our method is memory-based, incremental changes to the signature set can be accommodated by updating the look-up tables without reconfiguring the FPGA circuitry.", "keywords": "algorithms;design;performance;security;string-matching;deterministic and nondeterministic finite automaton;pipelined processing;intrusion detection system", "title": "A Memory-Efficient Pipelined Implementation of the Aho-Corasick String-Matching Algorithm"} {"abstract": "The burgeoning of the Internet has enormous potential for bringing scientific research into the hands of both health practitioners and health researchers to enhance their job performance. In this article, the authors give two examples of how carefully developed and organized online resources can leverage the engaging multimedia formats, ubiquitous access, and low cost of the Internet to address this goal. 
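Returning to the Aho-Corasick matcher discussed above: the sketch below is a minimal software rendition of the classic automaton (a character trie plus failure links), illustrating the linear-time matching property rather than the paper's pipelined hardware design:

```python
from collections import deque

def build_ac(patterns):
    """Build the Aho-Corasick automaton: a character trie plus failure
    links; matching time is linear in the text length and independent
    of the number of signature strings."""
    trie, fail, out = [{}], [0], [[]]
    for p in patterns:
        node = 0
        for ch in p:
            if ch not in trie[node]:
                trie.append({}); fail.append(0); out.append([])
                trie[node][ch] = len(trie) - 1
            node = trie[node][ch]
        out[node].append(p)
    q = deque(trie[0].values())            # depth-1 nodes fail to the root
    while q:
        node = q.popleft()
        for ch, nxt in trie[node].items():
            q.append(nxt)
            f = fail[node]
            while f and ch not in trie[f]:
                f = fail[f]                # follow failures until ch matches
            fail[nxt] = trie[f].get(ch, 0)
            out[nxt] += out[fail[nxt]]     # inherit matches ending here
    return trie, fail, out

def search(text, trie, fail, out):
    node, hits = 0, []
    for i, ch in enumerate(text):
        while node and ch not in trie[node]:
            node = fail[node]
        node = trie[node].get(ch, 0)
        for p in out[node]:
            hits.append((i - len(p) + 1, p))
    return hits

automaton = build_ac(["he", "she", "his", "hers"])
print(search("ushers", *automaton))        # [(1, 'she'), (2, 'he'), (2, 'hers')]
```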
The article describes two new online suites of social and behavioral science-based resources designed for those in the HIV/AIDS and teen pregnancy prevention fields: HIV Research and Practice Resources and Teen Pregnancy Research and Practice Resources. Each online suite includes research data, survey instruments, prevention resources, and evaluation-related publications and tools that can enhance prevention research and practice. The article ends by peering into the future at how the field of health-related prevention and research might be further advanced using the Internet.", "keywords": "science-based resources;internet;hiv/aids;prevention;health;research;practice", "title": "Development of online suites of social science-based resources for health researchers and practitioners"} {"abstract": "While developing systems, software engineers generally have to deal with a large number of design alternatives. Current object-oriented methods aim to eliminate design alternatives whenever they are generated. Alternatives, however, should be eliminated only when sufficient information to take such a decision is available. Otherwise, alternatives have to be preserved to allow further refinements along the development process. Too early elimination of alternatives results in loss of information and excessive restriction of the design space. This paper aims to enhance the current object-oriented methods by modeling and controlling the design alternatives through the application of fuzzy-logic-based techniques. By using an example method, it is shown that the proposed approach increases the adaptability and reusability of design models. The method has been implemented and tested in our experimental CASE environment.", "keywords": "design alternatives;object-oriented methods;fuzzy logic;adaptable design models;case environments;software artifacts", "title": "Deferring elimination of design alternatives in object-oriented methods"} {"abstract": "This paper describes the development of a fiber reinforced concrete (FRC) unit cell for analyzing concrete structures by executing a multiscale analysis procedure using the theory of homogenization. This was achieved through solving a periodic unit cell problem of the material in order to evaluate its macroscopic properties. Our research describes the creation of an FRC unit cell through the use of generic information about the concrete paste, e.g., the percentage of aggregates, their distribution, and the percentage of fibers in the concrete. The algorithm presented manipulates the percentage and distribution of these aggregates along with fiber weight to create a finite element unit cell model of the FRC which can be used in a multiscale analysis of concrete structures.", "keywords": "frc-fibered reinforced concrete;multiscale analysis;concrete unit cell;elastic properties;mesoscale concrete finite element model", "title": "Fiber reinforced concrete properties - a multiscale approach"} {"abstract": "The paper describes a new space of variable degree polynomials. This space is isomorphic to \(P_6\), possesses a Bernstein-like basis and has generalized tension properties in the sense that, for limit values of the degrees, its functions approximate quadratic polynomials. The corresponding space of \(C^3\) variable degree splines is also studied. 
This spline space can be profitably used in the construction of shape-preserving curves or surfaces.", "keywords": "variable degree polynomials;bernstein basis;b-splines;shape preservation", "title": "NEW SPLINE SPACES WITH GENERALIZED TENSION PROPERTIES"} {"abstract": "Programs devoted to the analysis of protein sequences exist either as stand-alone programs or as Web servers. However, stand-alone programs can hardly accommodate analyses that involve comparisons against databanks, which require regular updates. Moreover, Web servers cannot be as efficient as stand-alone programs when dealing with real-time graphic display. We describe here a stand-alone software program called ANTHEPROT, which is intended to perform protein sequence analysis with a high integration level and client/server capabilities. It is an interactive program with a graphical user interface that allows protein sequences and data to be handled in a very interactive and convenient manner. It provides many methods and tools, which are integrated into a graphical user interface. ANTHEPROT is available for Windows-based systems. It is able to connect to a Web server in order to perform large-scale sequence comparison on up-to-date databanks. ANTHEPROT is freely available to academic users and may be downloaded at http://pbil.ibcp.fr/ANTHEPROT.", "keywords": "protein sequence analysis;multiple alignment;secondary structure prediction;web server", "title": "ANTHEPROT: An integrated protein sequence analysis software with client/server capabilities"} {"abstract": "In recent times, dealing with deaths associated with cardiovascular diseases (CVD) has been one of the most challenging issues. The usage of mobile phones and portable Electrocardiogram (ECG) acquisition devices can mitigate the risks associated with CVD by providing faster patient diagnosis and patient care. The existing technologies entail delay in patient authentication and diagnosis. However, for cardiologists, minimizing the delay between a possible CVD symptom and patient care is crucial, as this has a proven impact on the longevity of the patient. Therefore, every second counts in terms of patient authentication and diagnosis. In this paper, we introduce the concept of Cardioid-based patient authentication and diagnosis. According to our experiments, the authentication time can be reduced from 30.64 s (manual authentication by a novice mobile user) to 0.4398 s (automated authentication). Our ECG-based patient authentication mechanism is up to 4878 times faster than conventional biometrics like face recognition. The diagnosis time could be improved from several minutes to less than 0.5 s (cardioid display on a single screen). Therefore, with our presented mission-critical alerting mechanism on wireless devices, minutes' worth of tasks can be reduced to seconds, without compromising the accuracy of authentication or the quality of diagnosis.", "keywords": "mission critical alerting;cardiovascular disease detection;remote monitoring;wireless monitoring;patient authentication;cardioid", "title": "Cardioids-based faster authentication and diagnosis of remote cardiovascular patients"} {"abstract": "In this article we focus on multiprocessor system-on-chip (MPSoC) architectures for human heart electrocardiogram (ECG) real-time analysis as a hardware/software (HW/SW) platform offering an advance relative to state-of-the-art solutions. 
This is a relevant biomedical application with a good potential market, since heart diseases are responsible for the largest number of yearly deaths. Hence, it is a good target for an application-specific system-on-chip (SoC) and HW/SW codesign. We investigate a symmetric multiprocessor architecture based on STMicroelectronics VLIW DSPs that process 12-lead ECG signals in real time. This architecture improves upon state-of-the-art SoC designs for ECG analysis in its ability to analyze the full 12 leads in real time, even with high sampling frequencies, and its ability to detect heart malfunction over the whole ECG signal interval. We explore the design space by considering a number of hardware and software architectural options. Comparing our design with present-day solutions from an SoC and application point of view shows that our platform can be used in real time and without failures.", "keywords": "performance;design;experimentation", "title": "A multiprocessor system-on-chip for real-time biomedical monitoring and analysis: ECG prototype architectural design space exploration"} {"abstract": "Color descriptors are one of the important features used in content-based image retrieval. The dominant color descriptor (DCD) represents a few perceptually dominant colors in an image through color quantization. For image retrieval based on DCD, the earth mover's distance (EMD) and the optimal color composition distance were proposed to measure the dissimilarity between two images. Although providing good retrieval results, both methods are too time-consuming to be used in a large image database. To solve the problem, we propose a new distance function that calculates an approximate earth mover's distance in linear time. To calculate the dissimilarity in linear time, the proposed approach employs the space-filling curve for multidimensional color space. To improve the accuracy, the proposed approach uses multiple curves and adjusts the color positions. As a result, our approach achieves an order-of-magnitude time improvement but incurs only small errors. We have performed extensive experiments to show the effectiveness and efficiency of the proposed approach. The results reveal that our approach achieves almost the same results as the EMD in linear time.", "keywords": "earth mover's distance;approximation;content-based image retrieval", "title": "Accurate Approximation of the Earth Mover's Distance in Linear Time"} {"abstract": "The use of the normalized maximum likelihood (NML) for model selection in Gaussian linear regression poses difficulties because the normalization coefficient is not finite. The most elegant solution has been proposed by Rissanen and consists in applying a particular constraint to the data space. In this paper, we demonstrate that the methodology can be generalized, and we discuss two particular cases, namely the rhomboidal and the ellipsoidal constraints. The new findings are used to derive four NML-based criteria. For three of them, which have already been introduced in the previous literature, we provide a rigorous analysis. We also compare them against five state-of-the-art selection rules by conducting Monte Carlo simulations for families of models commonly used in signal processing.
Additionally, for the eight criteria which are tested, we report results on their predictive capabilities for real-life data sets.", "keywords": "gaussian linear regression;model selection;normalized maximum likelihood;rhomboidal constraint;ellipsoidal constraint", "title": "Variable selection in linear regression: Several approaches based on normalized maximum likelihood"} {"abstract": "The multi-dimensional Black-Scholes equation is solved numerically for a European call basket option using a priori and a posteriori error estimates. The equation is discretized by a finite difference method on a Cartesian grid. The grid is adjusted dynamically in space and time to satisfy a bound on the global error. The discretization errors in each time step are estimated and weighted by the solution of the adjoint problem. Bounds on the local errors and the adjoint solution are obtained by the maximum principle for parabolic equations. Comparisons are made with Monte Carlo and quasi-Monte Carlo methods in one dimension, and the performance of the method is illustrated by examples in one, two, and three dimensions.", "keywords": "black-scholes equation;finite difference method;space adaptation;time adaptation;maximum principle", "title": "Space-time adaptive finite difference method for European multi-asset options"} {"abstract": "Let alpha = (a(1), a(2),...) be a sequence (finite or infinite) of integers with a(1) >= 0 and a(n) >= 1, for all n >= 2. Let {a, b} be an alphabet. For n >= 1 and r = r(1)r(2)...r(n) an element of N(n), the corresponding nth-order alpha-word w[r] is defined recursively for n >= 2. Many interesting combinatorial properties of alpha-words have been studied by Chuan. In this paper, we obtain some new methods of generating the distinct alpha-words of the same order in lexicographic order. Among other results, we consider another function r ↦ w[r] from the set of labels of alpha-words to the set of alpha-words. The string r is called a new label of the alpha-word w[r]. Using any new label of an nth-order alpha-word w, we can compute the number of the nth-order alpha-words that are less than w in the lexicographic order. With the radix orders <(r) on N(n) (regarding N as an alphabet) and {a, b}(+) with a <(r) b, we prove that there exists a subset D of the set of all labels such that w[r] <(r) w[s] whenever r, s is an element of D and r <(r) s.", "keywords": "alpha-word;radix order;lexicographic order", "title": "alpha-words and the radix order"} {"abstract": "A number of efficiency-based vendor selection and negotiation models have been developed to deal with multiple attributes including price, quality and delivery performance. The efficiency is defined as the ratio of weighted outputs to weighted inputs. By minimizing the efficiency, Talluri [Eur. J. Operat. Res. 143(1) (2002) 171] proposes a buyer-seller game model that evaluates the efficiency of alternative bids with respect to the ideal target set by the buyer. The current paper shows that this buyer-seller game model is closely related to data envelopment analysis (DEA) and can be simplified. The current paper also shows that setting the (ideal) target actually incorporates implicit tradeoff information on the multiple attributes into efficiency evaluation. We develop a new buyer-seller game model where the efficiency is maximized with respect to multiple targets set by the buyer. The new model allows the buyer to evaluate and select the vendors in the context of best practice.
By both minimizing and maximizing efficiency, the buyer can obtain an efficiency range within which the true efficiency lies given the implicit tradeoff information characterized by the targets. The current study establishes the linkage between buyer-seller game models and DEA. Such a linkage can provide the buyer with correct evaluation methods based upon existing DEA models regarding the nature of bidding.", "keywords": "game models;linear programming;efficiency;data envelopment analysis", "title": "A buyer-seller game model for selection and negotiation of purchasing bids: Extensions and new models"} {"abstract": "A Bayesian latent variable model with a classification and regression tree approach is built to overcome three challenges encountered by a bank in the credit-granting process. These three challenges include (1) the bank wants to predict the future performance of an applicant accurately; (2) given current information about cardholders' credit usage and repayment behavior, financial institutions would like to determine the optimal credit limit and APR for an applicant; and (3) the bank would like to improve its efficiency by automating the process of credit-granting decisions. Data from a leading bank in Taiwan is used to illustrate the combined approach. The data set consists of each credit card holder's credit usage and repayment data, demographic information, and credit report. The empirical study shows that the demographic variables used in most credit scoring models have little explanatory ability with regard to a cardholder's credit usage and repayment behavior. A cardholder's credit history provides the most important information in credit scoring. The continuous latent customer quality from the Bayesian latent variable model allows considerable latitude for producing finer rules for credit-granting decisions. Compared to the performance of discriminant analysis, logistic regression, neural networks, multivariate adaptive regression splines (MARS) and support vector machines (SVM), the proposed model has a 92.9% accuracy rate in predicting customer types, is less impacted by prior probabilities, and has significantly lower Type I errors in comparison with the other five approaches.", "keywords": "behavior scoring;credit scoring;bayesian;latent variable model;classification and regression tree", "title": "A Bayesian latent variable model with classification and regression tree approach for behavior and credit scoring"} {"abstract": "Today there is a need to make process and production planning more cost-effective while not compromising the quality of the product. Manufacturing requirements are used to ensure producibility in early development phases and also as a source for continuous improvement of the manufacturing system. To make this possible, it is essential to have correct, updated information available and to be able to trace the relations between requirements and their origin and subjects. Tracing requirements' origin in resources or processes is today very difficult owing to system integration problems. This article discusses the relations that need to be represented and proposes the use of model-based methods to enable traceability of requirements. Because requirements are a collaborative effort, a standard for information exchange is needed.
The ISO 10303 STEP application protocol AP233 (Systems Engineering) is proposed for this purpose.", "keywords": "information management;manufacturing;requirements", "title": "The representation of manufacturing requirements in model-driven parts manufacturing"} {"abstract": "This research applies a new heuristic combined with a genetic algorithm (GA) to the task of logic minimization for incompletely specified data, with both single and multiple outputs, using the Generalized Reed-Muller (GRM) equation form. The GRM equation type is a canonical expression of the Exclusive-Or Sum-of-Products (ESOP) type, in which for every subset of input variables there exists not more than one term with arbitrary polarities of all variables. This AND-EXOR implementation has been shown to be economical, generally requiring fewer gates and connections than AND-OR logic. GRM logic is also highly testable, making it desirable for FPGA designs. The minimization results of this new algorithm tested on a number of binary benchmarks are given. This minimization algorithm utilizes a GA with a two-level fitness calculation, which combines human-designed heuristics with the evolutionary process, employing Baldwinian learning. In this algorithm, a pure GA first creates certain constraints for the selection of chromosomes, creating only genotypes (polarity vectors). The phenotypes (GRMs) are then learned in the environment and contribute to the GA fitness (which is the total number of terms of the best GRM for each output), providing indirect feedback as to the quality of the genotypes (polarity vectors), but the genotype chromosomes (polarity vectors) remain unchanged. In this process, the improvement in genotype chromosomes (polarity vectors) is the product of the evolutionary processes of the GA only. The environmental learning is achieved using a human-designed GRM minimization heuristic. As much previous research has presented the merit of AND-EXOR logic for its high density and testability, this research is the first application of the GRM (a canonical AND-EXOR form) to the minimization of incompletely specified data.", "keywords": "incompletely specified generalized reed-muller forms;and-exor forms;logic synthesis and minimization;baldwinian learning;genetic algorithms", "title": "Baldwinian learning utilizing genetic and heuristic algorithms for logic synthesis and minimization of incompletely specified data with Generalized Reed-Muller (AND-EXOR) forms"} {"abstract": "Overlapping domain decomposition methods, otherwise known as overset grid or chimera methods, are useful for simplifying the discretization of partial differential equations in or around complex geometries. Though in wide use, such methods are prone to numerical instability unless numerical diffusion or some other form of regularization is used, especially for higher-order methods. To address this shortcoming, high-order, provably energy stable, overlapping domain decomposition methods are derived for hyperbolic initial boundary value problems. The overlap is treated by splitting the domain into pieces and using generalized summation-by-parts derivative operators and polynomial interpolation. New implicit and explicit operators are derived that do not require regularization for stability in the linear limit.
Applications to linear and nonlinear problems in one and two dimensions are presented, where it is found that the explicit operators are preferred to the implicit ones.", "keywords": "high order finite difference methods;overlapping domain decomposition;numerical stability;generalized summation-by-parts", "title": "Energy stable numerical methods for hyperbolic partial differential equations using overlapping domain decomposition"} {"abstract": "As a leading partitional clustering technique, k-modes is one of the most computationally efficient clustering methods for categorical data. In k-modes, a cluster is represented by a \"mode,\" which is composed of the attribute value that occurs most frequently in each attribute domain of the cluster, whereas, in real applications, using only one attribute value in each attribute to represent a cluster may not be adequate, as it could in turn affect the accuracy of data analysis. To remedy this deficiency, several modified clustering algorithms were developed by assigning appropriate weights to several attribute values in each attribute. Although these modified algorithms are quite effective, their convergence proofs are lacking. In this paper, we analyze their convergence property and prove that they cannot guarantee to converge under their optimization frameworks unless they degrade to the original k-modes type algorithms. Furthermore, we propose two different modified algorithms with weighted cluster prototypes to overcome the shortcomings of these existing algorithms. We rigorously derive updating formulas for the proposed algorithms and prove the convergence of the proposed algorithms. The experimental studies show that the proposed algorithms are effective and efficient for large categorical datasets.", "keywords": "clustering;k-modes type clustering algorithms;categorical data;weighted cluster prototype;convergence", "title": "The Impact of Cluster Representatives on the Convergence of the K-Modes Type Clustering"} {"abstract": "This paper examines the effectiveness of the implementation of enterprise resource planning (ERP) in improving service quality in the Taiwanese semiconductor industry by assessing the expectations and the perceptions of service quality from the perspectives of both upstream manufacturers and downstream customers. The study first establishes a modified service quality gap model incorporating: (i) the downstream customers' expectations and perceptions, and (ii) the upstream manufacturers' perceptions of the customers' expectations and perceptions. An empirical study by questionnaire survey is then undertaken to investigate the gaps proposed in the research model. The results show that service quality gaps do exist in the Taiwanese semiconductor industry between upstream manufacturers that are implementing ERP and their downstream customers. The study shows that the proposed model provides valuable guidance to manufacturers with respect to the prevention, detection, and elimination of the demonstrated service quality gaps. The model thus helps manufacturers to evaluate the contribution of various ERP modules to improved customer satisfaction with service quality and also provides guidance on improvement strategies to enhance service quality by eliminating quality gaps.
", "keywords": "enterprise resource planning ;semiconductor industry;service quality gaps;erp implementation", "title": "Service quality and ERP implementation: A conceptual and empirical study of semiconductor-related industries in Taiwan"} {"abstract": "We consider approximation algorithms for the problem of computing an inscribed rectangle having largest area in a convex polygon on n vertices. If the order of the vertices of the polygon is given, we present a randomized algorithm that computes an inscribed rectangle with area at least (1) ( 1) times the optimum with probability t in time O ( 1 ? log n ) for any constant t<1 t < 1 . We further give a deterministic approximation algorithm that computes an inscribed rectangle of area at least (1) ( 1) times the optimum in running time O ( 1 ? 2 log n ) and show how this running time can be slightly improved.", "keywords": "approximation algorithms;geometric algorithms;largest area rectangle;inscribed rectangles in polygons", "title": "Largest inscribed rectangles in convex polygons"} {"abstract": "We propose an effective node-selection scheme in the stream environment of solar-powered WSNs. We analyzed the stream environment including single stream and cross-stream cases. The deployment conditions are appropriate to each stream case. Based on the node selection scheme, the number of active nodes and transmitted packets is minimized. The proposed scheme prolongs the lifetime of the solar-powered WSN in a stream environment.", "keywords": "sensor deployment;node-selection;stream environment;solar-powered sensor;wireless sensor network", "title": "An effective node-selection scheme for the energy efficiency of solar-powered WSNs in a stream environment"} {"abstract": "In recent works of the author [found Phys 36 (2006) 1701-1717, Math Comput simul 74 (2007) 93-103], the argument has been made that Hertz's equations of electrodynamics reflect the material invariance (indifference) of the latter. Then the principle of material invariance was postulated in heu of Lorentz covariance. and the respective absolute medium wits named the metacontinuum Here. we go further to assume that the metacontinuum is a very thin but very stuff 3D hypershell in the 4D space The equation for the deflection of the shell along the fourth dimension is the \"master\" nonlinear dispersive equation of wave mechanics whose linear part (Euler-Bernoulli equation) is nothing else but the Schrodinger wave equation written for the real or the imaginary part of the wave function. The wave function has a clear non-probabilistic interpretation as the actual amplitude of the flexural deformation The \"master\" equation admits solitary-wave solutions/solutions that behave as quasi-particles (QPs). We stipulate that particles are our perception of the QPs (schaumkommen in Schrodinger's own words). We show the passage from the continuous Lagrangian of the field to the discrete Lagrangian of the centers of QPs and introduce the concept of (pseudo)mass. We interpret the membrane tension as all attractive (gravitational?) force acting between the QPs. Thus. it self-consistent unification of electrodynamics, wave mechanics, gravitation. 
and the wave-particle duality is achieved.", "keywords": "luminiferous metacontinuum;maxwell-hertz electrodynamics;schrodinger wave mechanics;quasi-particles;particle-wave duality", "title": "The concept of a quasi-particle and the non-probabilistic interpretation of wave mechanics"} {"abstract": "Function models are frequently used in engineering design to describe the technical functions that a product performs. This paper investigates the use of the functional basis, a function vocabulary developed to aid in communication and archiving of product function information, in describing consumer products that have been decomposed, analyzed, modeled functionally, and stored in a Web-based design repository. The frequency of use of function terms and phrases in 11 graphical and 110 list-based representations in the repository is examined and used to analyze the organization and expressiveness of the functional basis and function models. Within the context of reverse engineering, we determined that the modeling resolution provided by the hierarchical levels, especially the tertiary level, is inadequate for function modeling; the tertiary terms are inappropriate for capturing sufficient details desired by modelers for archiving and reuse, and there is a need for more expressive flow terms and flow qualifiers in the vocabulary. A critical comparison is also presented of two representations in the design repository: function structures and function lists. The conclusions are used to identify new research opportunities, including the extension of the vocabulary to incorporate flow qualifiers in addition to more expressive terms.", "keywords": "functional basis;function model;function representation;vocabulary", "title": "An empirical study of the expressiveness of the functional basis"} {"abstract": "In this paper, a 6.7-kbps vector sum excited linear prediction (VSELP) coder with reduced computational complexity is presented. A very efficient VSELP codebook with nine basis vectors and a heuristic K-selection method (to reduce the search space and complexity) is constructed to obtain the stochastic codebook vector. The nine basis vectors are obtained by optimizing a set of randomly generated basis vectors. During the optimization process, we have trained the basis vectors to give the system a priori knowledge of the characteristics of the input. The coder is implemented on a TMS320C541 digital signal processor. The performance is evaluated by testing the 6.7-kbps VSELP coder with different test speech data taken from different speakers. The quality of the coder is estimated by comparing the performance of the 6.7-kbps VSELP coder with an 8-kbps VSELP speech coder based on the IS-54 standard.", "keywords": "vector sum excited linear prediction;code excited linear prediction;linear predictive coding;digital signal processor", "title": "A 6.7 kbps vector sum excited linear prediction on TMS320C54X digital signal processor"} {"abstract": "A design-for-digital-testability (DfDT) switched-capacitor circuit structure for testing Sigma-Delta modulators with digital stimuli is presented to reduce the overall testing cost. In the test mode, the DfDT circuits are reconfigured as a one-bit digital-to-charge converter to accept a repetitively applied Sigma-Delta modulated bit-stream as its stimulus. The single-bit characteristic ensures that the generated stimulus is nonlinearity-free.
In addition, the proposed DfDT structure reuses most of the analog components in the test mode and keeps the same loads for the operational amplifiers as if they were in the normal mode. It thereby achieves many advantages, including lower cost, higher fault coverage, higher measurement accuracy, and the capability of performing at-speed tests. A second-order Sigma-Delta modulator was designed and fabricated to demonstrate the effectiveness of the DfDT structure. Our experimental results show that the digital test is able to measure a harmonic distortion lower than -106 dBFS. Meanwhile, the dynamic range measured with the digital stimulus is as high as 84.4 dB at an over-sampling ratio of 128. The proposed DfDT scheme can be easily applied to other types of Sigma-Delta modulators, making them also digitally testable.", "keywords": "analog-to-digital converter;design-for-testability;digitally testable;mixed-signal circuit testing;sigma-delta modulator", "title": "A design-for-digital-testability circuit structure for Sigma-Delta modulators"} {"abstract": "Exploring local community structure is an appealing problem that has drawn much recent attention in the area of social network analysis. As complete information about a network, such as networks of web pages, research papers, and Facebook users, is often difficult to obtain, one can only detect community structure from a certain source vertex with limited knowledge of the entire graph. The existing approaches do well in measuring community quality, but they are largely dependent on the source vertex and apply overly strict policies when agglomerating new vertices. Moreover, they have predefined parameters which are difficult to obtain. This paper proposes a method to find local community structure by analyzing the link similarity between the community and the vertex. Inspired by the fact that elements in the same community are more likely to share common links, we explore community structure heuristically by giving priority to vertices which have a high link similarity with the community. A three-phase process is also used to improve the quality of the community structure. Experimental results prove that our method performs effectively not only in computer-generated graphs but also in real-world graphs.", "keywords": "social network analysis;community detection;link similarity", "title": "Local Community Detection Using Link Similarity"} {"abstract": "Objectives: To heighten awareness about the critical issues currently affecting patient care and to propose solutions based on leveraging information technologies to enhance patient care and influence a culture of patient safety. Methods: Presentation and discussion of the issues affecting health care today, such as medical and medication-related errors and analysis of their root causes; proliferation of medical knowledge and medical technologies; initiatives to improve patient safety; steps necessary to develop a culture of safety; introduction of relevant enabling technologies; and evidence of results. Results and Conclusions: Medical errors affect not only mortality and morbidity, but they also create secondary costs leading to dissatisfaction by both provider and patient. Health care has been slow to acknowledge the benefits of enabling technologies to affect the quality of care.
Evaluation of recent applications, such as the computerized patient record, physician order entry, and computerized alerting systems, shows tremendous potential to enhance patient care and influence the development of a culture focused on safety. They will also bring about changes in other areas, such as workflow and the creation of new partnerships among providers, patients, and payers.", "keywords": "medical errors;patient safety;information technology;computerized patient record;physician order entry;clinical decision support;clinical outcomes;evidence-based medicine", "title": "Leveraging information technology towards enhancing patient care and a culture of safety in the US"} {"abstract": "The simulation of the emission of beta-delayed gamma rays following nuclear fission and the calculation of time-dependent energy spectra is a computational challenge. The widely used radiation transport code MCNPX includes a delayed gamma-ray routine that is inefficient and not suitable for simulating complex problems. This paper describes the code MMAPDNG (Memory-Mapped Delayed Neutron and Gamma), an optimized delayed gamma module written in C, discusses the usage and merits of the code, and presents results. The approach is based on storing the required Fission Product Yield (FPY) data, decay data, and delayed particle data in a memory-mapped file. When compared to the original delayed gamma-ray code in MCNPX, memory utilization is reduced by two orders of magnitude and the gamma-ray sampling is sped up by three orders of magnitude. Other delayed particles such as neutrons and electrons can be implemented in future versions of the MMAPDNG code using its existing framework.", "keywords": "delayed gamma;fission products;mcnpx;mmap", "title": "MMAPDNG: A new, fast code backed by a memory-mapped database for simulating delayed gamma-ray emission with the MCNPX package"} {"abstract": "High temperature co-fired ceramics (HTCCs) have wide applications with stable mechanical properties, but they have not yet been used to fabricate sensors. By introducing a wireless telemetric sensor system and a ceramic structure embedding a pressure-deformable cavity, the designed sensors made from HTCC materials (zirconia and 96% alumina) are fabricated, and their capacities for pressure measurement are tested using a wireless interrogation method. Using the fabricated sensor, a study is conducted to measure the atmospheric pressure in a sealed vessel. The experimental sensitivity of the device is 2 Hz/Pa for zirconia and 1.08 Hz/Pa for alumina below 0.5 MPa with a readout distance of 2.5 cm. The described sensor technology can be applied for monitoring of atmospheric pressure to evaluate important component parameters in harsh environments.", "keywords": "high temperature co-fired ceramic;wireless;micro-electro-mechanical systems", "title": "Measurement of wireless pressure sensors fabricated in high temperature co-fired ceramic MEMS technology"} {"abstract": "In this paper, we propose a new real-time rectification technique based on a compressed lookup table. To compress the lookup table, we adopt differential encoding. As a result, we construct the rectification with a compression ratio of 73% while fulfilling the real-time requirement (i.e., 40 fps at 74.25 MHz).
Furthermore, our performance is comparable to that of [17], which obtains 85 fps at 90 MHz for 640x512 images.", "keywords": "data compression;rectification;differential encoding;real time;lookup table", "title": "real time rectification using differentially encoded lookup table"} {"abstract": "We develop a maximum likelihood regression tree-based model to predict subway incident delays. An AFT model is assigned to each terminal node in the maximum likelihood regression tree. Our tree-based model outperforms the traditional AFT models with fixed and random effects. Our tree-based model can account for the heterogeneity effect as well as avoid the over-fitting problem.", "keywords": "subway incidents;delay;maximum likelihood regression tree;accelerated failure time", "title": "Development of a maximum likelihood regression tree-based model for predicting subway incident delay"} {"abstract": "Inverse frequent set mining (IFM) is the problem of computing a transaction database D satisfying given support constraints for some itemsets, which are typically the frequent ones. This article proposes a new formulation of IFM, called IFMI (IFM with infrequency constraints), where the itemsets that are not listed as frequent are constrained to be infrequent; that is, they must have a support less than or equal to a specified unique threshold. An instance of IFMI can be seen as an instance of the original IFM by making explicit the infrequency constraints for the minimal infrequent itemsets, corresponding to the so-called negative generator border defined in the literature. The complexity increase from PSPACE (complexity of IFM) to NEXP (complexity of IFMI) is caused by the cardinality of the negative generator border, which can be exponential in the original input size. Therefore, the article introduces a specific problem parameter that computes an upper bound to this cardinality using a hypergraph interpretation for which minimal infrequent itemsets correspond to minimal transversals. By fixing a constant k, the article formulates a k-bounded definition of the problem, called k-IFMI, that collects all instances for which the value of the parameter is less than or equal to k; its complexity is in PSPACE as for IFM. The bounded problem is encoded as an integer linear program with a large number of variables (actually exponential w.r.t. the number of constraints), which is thereafter approximated by relaxing the integer constraints; the decision problem of solving the linear program is proven to be in NP. In order to solve the linear program, a column generation technique is used that is a variation of the simplex method designed to solve large-scale linear programs, in particular with a huge number of variables. The method at each step requires the solution of an auxiliary integer linear program, which is proven to be NP-hard in this case and for which a greedy heuristic is presented. The resulting overall column generation solution algorithm enjoys very good scaling as evidenced by the intensive experimentation, thereby paving the way for its application in real-life scenarios.", "keywords": "algorithms;experimentation;theory;frequent itemset mining;inverse problem;minimal hypergraph transversals;column generation simplex", "title": "Solving Inverse Frequent Itemset Mining with Infrequency Constraints via Large-Scale Linear Programs"} {"abstract": "A simple and efficient local optimization-based procedure for node repositioning/smoothing of three-dimensional tetrahedral meshes is presented.
The initial tetrahedral mesh is optimized with respect to a specified element shape measure by a chaos search algorithm, which is very effective for optimization problems with only a few design variables. Examples show that the presented smoothing procedure can provide favorable conditions for the local transformation approach, and that the quality of the mesh can be significantly improved by the combination of these two procedures with respect to a specified element shape measure. Meanwhile, several commonly used shape measures for tetrahedral elements, which have long been considered equivalent in some weak sense, are briefly re-examined in this paper. A preliminary study indicates that using different measures to evaluate the change of element shape will probably lead to inconsistent results for both well-shaped and poorly shaped elements. The proposed smoothing approach can be utilized as an appropriate and effective tool for evaluating element shape measures and their influence on the mesh optimization process and the optimal solution.", "keywords": "mesh optimization;smoothing;chaos search algorithm;element shape measure", "title": "An efficient optimization procedure for tetrahedral meshes by chaos search algorithm"} {"abstract": "People involved in assisted reproduction frequently make decisions about which of several embryos to implant or which of several embryos to reduce from a multiple pregnancy. Yet, others have raised questions about the ethical acceptability of using sex or genetic characteristics as selection criteria. This paper reviews arguments for rejecting embryo selection and discusses the subject of choosing offspring in terms of the centrality of liberty and autonomous choice in ethics. It also presents a position on the acceptable scope of embryo selection and the professional responsibilities of those who practice reproductive medicine.", "keywords": "ethics;embryos;sex selection;reproductive choice;liberty;disabilities;discrimination", "title": "Ethical Issues in Selecting Embryos"} {"abstract": "The success of firms engaged in e-commerce depends on their ability to understand and exploit the dynamics of the market. One component of this is the ability to extract maximum profit and minimize costs in the face of the harsh competition that the internet provides. We present a general framework for modeling the competitive equilibrium across two firms, or across a firm and the market as a whole. Within this framework, we study pricing choices and analyze the decision to outsource IT capability. Our framework is novel in that it allows for any number of distributions on usage levels, price-QoS tradeoffs, and price and cost structures.", "keywords": "e-commerce;non-cooperative nash equilibrium;pricing;qos;outsourcing", "title": "Competitive equilibrium in e-commerce: Pricing and outsourcing"} {"abstract": "The newly developed immersed object method (IOM) [Tai CH, Zhao Y, Liew KM. Parallel computation of unsteady incompressible viscous flows around moving rigid bodies using an immersed object method with overlapping grids. J Comput Phys 2005; 207(1): 151-72] is extended for 3D unsteady flow simulation with fluid-structure interaction (FSI), which is made possible by combining it with a parallel unstructured multigrid Navier-Stokes solver using a matrix-free implicit dual time stepping and finite volume method [Tai CH, Zhao Y, Liew KM. Parallel computation of unsteady three-dimensional incompressible viscous flow using an unstructured multigrid method.
In: The second M.I.T. conference on computational fluid and solid mechanics, June 17-20, MIT, Cambridge, MA 02139, USA, 2003; Tai CH, Zhao Y, Liew KM. Parallel computation of unsteady three-dimensional incompressible viscous flow using an unstructured multigrid method, Special issue on "Preconditioning methods: algorithms, applications and software environments". Comput Struct 2004; 82(28): 2425-36]. This uniquely combined method is then employed to perform a detailed study of 3D unsteady flows with complex FSI. In the IOM, a body force term F is introduced into the momentum equations during the artificial compressibility (AC) sub-iterations so that a desired velocity distribution V_0 can be obtained on and within the object boundary, which need not coincide with the grid, by adopting the direct forcing method. An object mesh is immersed into the flow domain to define the boundary of the object. The advantage of this is that bodies of almost arbitrary shapes can be added without grid restructuring, a procedure which is often time-consuming and computationally expensive. It has enabled us to perform complex and detailed simulations of 3D unsteady blood flow and blood-leaflet interaction in a mechanical heart valve (MHV) under physiological conditions.", "keywords": "fluid-structure interaction;immersed object method;overlapping grids;unstructured parallel-multigrid computation;matrix-free implicit method;3d unsteady incompressible flows;mechanical heart valves", "title": "Numerical simulation of 3D fluid-structure interaction flow using an immersed object method with overlapping grids"} {"abstract": "We consider the joint pricing and inventory control problem for a single product over a finite horizon and with periodic review. The demand distribution in each period is determined by an exogenous Markov chain. Pricing and ordering decisions are made at the beginning of each period and all shortages are backlogged. The surplus costs as well as fixed and variable costs are state dependent. We show the existence of an optimal (s,S,p)-type feedback policy for the additive demand model. We extend the model to the case of emergency orders. We compute the optimal policy for a class of Markovian demand and illustrate the benefits of dynamic pricing over fixed pricing through numerical examples. The results indicate that it is more beneficial to implement dynamic pricing in a Markovian demand environment with a high fixed ordering cost or with high demand variability.", "keywords": "joint pricing and inventory control;markovian demand;optimal feedback policy", "title": "Joint pricing and inventory control with a Markovian demand model"} {"abstract": "Machine learning techniques are widely used in negotiation systems. To get more accurate and satisfactory learning results, negotiation parties have the desire to employ learning techniques on the union of their past negotiation records. However, negotiation records are usually confidential and private, and owners may not want to reveal the details of these records. In this paper, we introduce a privacy-preserving negotiation learning scheme that incorporates secure multiparty computation techniques into negotiation learning algorithms to allow negotiation parties to securely complete the learning process on a union of distributed data sets. As an example, a detailed solution for secure negotiation Q-learning is presented based on two secure multiparty computations: weighted mean and maximum.
We also introduce a novel protocol for the secure maximum operation.", "keywords": "negotiation;secure maximum;privacy;q-learning", "title": "privacy preserving learning in negotiation"} {"abstract": "Rapid prediction tools for reservoir over-year and within-year capacities that dispense with the sequential analysis of time-series runoff data are developed using multiple linear regression and multi-layer perceptron artificial neural networks (MLP-ANNs). Linear regression was used to model the total (i.e., within-year + over-year) capacity using the over-year capacity as one of the inputs, while the ANNs were used to simultaneously model the over-year and total capacities directly. The inputs used for the ANNs were basic runoff and system variables such as the coefficient of variation (Cv) of annual and monthly runoff, minimum monthly runoff, the demand ratio and reservoir reliability. The results showed that all the models performed well during their development and when they were tested with independent data sets. Both models offer faster prediction tools for reservoir capacity at gauged sites when compared with behaviour simulation. Additionally, when the predictor variables can be evaluated at un-gauged sites using, e.g., catchment characteristics, they make capacity estimation at such un-gauged sites a feasible proposition.", "keywords": "artificial neural networks;storage-yield-reliability;sequent-peak algorithm;over-year capacity;within-year capacity;multiple regression;un-gauged sites", "title": "The relative utility of regression and artificial neural networks models for rapidly predicting the capacity of water supply reservoirs"} {"abstract": "We analyze the MAC access delay of the IEEE 802.11e enhanced distributed channel access (EDCA) mechanism under saturation. We develop a detailed analytical model to evaluate the influence of all EDCA differentiation parameters, namely AIFS, CWmin, CWmax, and TXOP limit, as well as the backoff multiplier beta. Explicit expressions for the mean, standard deviation, and generating function of the access delay distribution are derived. By applying numerical inversion on the generating function, we are able to efficiently compute values of the distribution. Comparison with simulation confirms the accuracy of our analytical model over a wide range of operating conditions. We derive simple asymptotics and approximations for the mean and standard deviation of the access delay, which reveal the salient model parameters for performance under different differentiation mechanisms. We also use the model to numerically study the differentiation performance and find that beta differentiation, though rejected during the standardization process, is an effective differentiation mechanism that has some advantages over the other mechanisms.", "keywords": "medium access delay;ieee 802.11e;qos;edca;service differentiation;generating function", "title": "An Access Delay Model for IEEE 802.11e EDCA"} {"abstract": "The objective of this study was to test whether information presented on slides during presentations is retained at the expense of information presented only orally, and to investigate some of the conditions under which this effect occurs and how it can be avoided.
Such an effect could be expected and explained either as a kind of redundancy effect due to excessive cognitive load caused by the simultaneous presentation of oral and written information, or as a consequence of a dysfunctional allocation of attention at the expense of oral information in learners who attach high subjective importance to slides. The hypothesized effect and these potential explanations were tested in an experimental study. In courses about literature search and access, 209 university students received a presentation accompanied either by no slides or by regular or concise PowerPoint slides. The retention of information presented orally and of information presented orally and on slides was measured separately in each condition and standardized for comparability. Cognitive load and subjective importance of slides were also measured. The results indicate a "speech suppression effect" of regular slides at the expense of oral information (within and across conditions), which cannot be explained by cognitive overload but rather by a dysfunctional allocation of attention, and which can be avoided by concise slides. It is concluded that theoretical approaches should account for the allocation of attention below the threshold of cognitive overload and its role in learning, and that a culture of presentations with concise slides should be established.", "keywords": "improving classroom teaching;media in education;post-secondary education;teaching/learning strategies", "title": "Slide presentations as speech suppressors: When and why learners miss oral information"} {"abstract": "A parallel electrostatic Poisson's equation solver coupled with parallel adaptive mesh refinement (PAMR) is developed in this paper. The three-dimensional Poisson's equation is discretized using the Galerkin finite element method on a tetrahedral mesh. The resulting matrix equation is then solved through the parallel conjugate gradient method using a non-overlapping subdomain-by-subdomain scheme. A PAMR module is coupled with this parallel Poisson's equation solver to adaptively refine the mesh where the variation of potentials is large. The parallel performance of the Poisson's equation solver is studied by simulating the potential distribution of a CNT-based triode-type field emitter. Results with ~100,000 nodes show that a parallel efficiency of 84.2% is achieved on 32 processors of a PC-cluster system. The field emission properties of a single CNT triode- and tetrode-type field emitter in a periodic cell are computed to demonstrate their potential application in field emission prediction.", "keywords": "parallel poisson's equation;galerkin finite element method;parallel adaptive mesh refinement;field emission", "title": "Development of a parallel Poisson's equation solver with adaptive mesh refinement and its application in field emission prediction"} {"abstract": "We present a numerical study of the spinodal decomposition of a binary fluid undergoing shear flow using the advective Cahn-Hilliard equation, a stiff, nonlinear, parabolic equation characterized by the presence of fourth-order spatial derivatives. Our numerical solution procedure is based on isogeometric analysis, an approximation technique for which basis functions of high-order continuity are employed. These basis functions allow us to directly discretize the advective Cahn-Hilliard equation without resorting to a mixed formulation.
We present steady-state solutions for rectangular domains in two dimensions and, for the first time, in three dimensions. We also present steady-state solutions for the two-dimensional Taylor-Couette cell. To enforce periodic boundary conditions in this curved domain, we derive and utilize a new periodic Bézier extraction operator. We present an extensive numerical study showing the effects of shear rate, surface tension, and the geometry of the domain on the phase evolution of the binary fluid. Theoretical and experimental results are compared with our simulations.", "keywords": "cahn-hilliard equation;spinodal decomposition;shear flow;steady state;isogeometric analysis;bézier extraction", "title": "Isogeometric analysis of the advective Cahn-Hilliard equation: Spinodal decomposition under shear flow"} {"abstract": "This paper presents aspects of a compiler for a new hardware description language (VHDL) written using attribute grammar techniques. VHDL is introduced, along with the new compiler challenges brought by a language that extends an Ada subset for the purpose of describing hardware. Attribute grammar programming solutions are presented for some of the language challenges. The organization of the compiler and of the target virtual machine represented by the simulation kernel are discussed, and performance and code-size figures are presented. The paper concludes that attribute grammars can be used for large commercial compilers with excellent results in terms of rapid development time and enhanced maintainability, and without paying any substantial penalty in terms of either the complexity of the language that can be handled or the resulting compilation speed.", "keywords": "challenge;program;organization;aspect;simulation;hardware description language;methodology;developer;maintainability;size;code;language;performance;compilation;timing;complexity;hardware;paper;virtual machine;attribute grammars", "title": "a vhdl compiler based on attribute grammar methodology"} {"abstract": "In this paper we present a telemedical environment based on virtual medical devices (VMDs) implemented with Java mobile agent technology, called aglets. The agent-based VMD implementation provides ad-hoc agent interaction, support for mobile agents, and different user interface components in the telemedical system. We have developed a VMD agent framework with four types of agents: data agents, processing agents, presentation agents, and monitoring agents. Data agents abstract the data source, creating a uniform view of different types of data, independent of the data acquisition device. Processing agents produce derived data, such as FFT power spectra, from raw data provided by the data agents. Presentation agents supply user interface components using a variety of user data views. User interface components are based on HTTP, SMS and WAP protocols. Monitoring agents collaborate with data and processing agents, providing support for data mining operations, and search for relevant patterns. A typical example is monitoring for possible epileptic attacks. We have applied VMDs to facilitate distributed EEG analysis. We have found that the flexibility of the distributed agent architecture is well suited for the telemedical application domain.
This flexibility is particularly important in the case of an emergency, enabling swift system reconfiguration on the fly.", "keywords": "distributed systems;telemedicine;software agents", "title": "an agent based framework for virtual medical devices"} {"abstract": "A new point collocation algorithm named the Finite Block Method (FBM), which is based on the one-dimensional differential matrix, is developed for 2D and 3D elasticity problems in this paper. The main idea is to construct the first-order one-dimensional differential matrix for one block by using a Lagrange series with uniformly distributed nodes. The higher-order derivative matrix for the one-dimensional problem is then obtained. By introducing a mapping technique, a block of quadratic type is transformed from the Cartesian coordinate system (x, y, z) to the normalised coordinate system (ξ, η, ζ), with 8 seeds or 20 seeds for two or three dimensions, respectively. The differential matrices in the physical domain are determined from those in the normalised transformed system. Several 2D and 3D examples are given, and comparisons have been made with either analytical solutions or the boundary element method to demonstrate the accuracy and convergence of this method.", "keywords": "finite block method;1d mapping differential matrix;lagrange series expansion;elasticity;functionally graded media;anisotropy", "title": "Finite Block Method in elasticity"} {"abstract": "Semistatic byte-oriented word-based compression codes have been shown to be an attractive alternative to compress natural language text databases, because of the combination of speed, effectiveness, and direct searchability they offer. In particular, our recently proposed family of dense compression codes has been shown to be superior to the more traditional byte-oriented word-based Huffman codes in most aspects. In this paper, we focus on the problem of transmitting texts among peers that do not share the vocabulary. This is the typical scenario for adaptive compression methods. We design adaptive variants of our semistatic dense codes, showing that they are much simpler and faster than dynamic Huffman codes and reach almost the same compression effectiveness. We show that our variants have a very compelling trade-off between compression/decompression speed, compression ratio, and search speed compared with most of the state-of-the-art general compressors.", "keywords": "text databases;natural language text compression;dynamic compression;searching compressed text", "title": "New adaptive compressors for natural language text"} {"abstract": "The purpose of this introduction is to provide a brief overview of the articles in this special issue and also a framework for understanding, designing and evaluating strategies for co-operative learning in the workplace and in educational environments. The special edition is divided into two parts: Issue 1, Computer Supported Collaborative Learning in Formal Education, and Issue 2, Computer Supported Team and Organisational Learning in Workplaces. In general, Issue 1 focuses on collaborative learning in primary and secondary schools and in the university setting. Issue 2 focuses on learning in complex and often highly stressful work situations which mostly require intensive communication in groups or teams and in each case allow for learning in the wider organisation.
This introduction outlines a set of themes that can be found in the following papers and traces briefly how each paper fits within each discussion.", "keywords": "computer supported collaborative learning;schools;workplace learning;formal learning;activity theory;technology;organisations", "title": "Organisational computer supported collaborative learning: the affect of context"} {"abstract": "The proportion ratio (PR) of patient response is one of the most commonly used indices for measuring the relative treatment effect in a randomized clinical trial (RCT). Assuming a random effect multiplicative risk model, we develop two point estimators and three interval estimators in closed form for the PR under a simple crossover RCT. On the basis of Monte Carlo simulation, we evaluate the performance of these estimators in a variety of situations. We note that the point estimator using a ratio of two arithmetic means of patient response probabilities over the two groups (distinguished by the order of treatment-received sequences) is generally preferable to the corresponding one using a ratio of two geometric means of patient response probabilities. We note that the three interval estimators developed in this paper can actually perform well with respect to the coverage probability when the number of patients per group is moderate or large. We further note that the interval estimator based on the ratio of two arithmetic means of patient response probabilities with the logarithmic transformation is probably the best among the three interval estimators discussed here. We use a simple crossover trial, published elsewhere, studying the suitability of two new inhalation devices for patients who were using a standard inhaler device delivering Salbutamol to illustrate the use of these estimators.", "keywords": "binary data;crossover trial;proportion ratio;bias;precision;coverage probability;interval estimator;point estimation", "title": "Estimation of the proportion ratio under a simple crossover trial"} {"abstract": "Identifying common patterns among area cladograms that arise in historical biogeography is an important tool for biogeographical inference. We develop the first rigorous formalization of these pattern-identification problems. We develop metrics to compare area cladograms. We define the maximum agreement area cladogram (MAAC) and we develop efficient algorithms for finding the MAAC of two area cladograms, while showing that it is NP-hard to find the MAAC of several binary area cladograms. We also describe a linear-time algorithm to identify if two area cladograms are identical.", "keywords": "biogeography;area cladograms;distance metrics;maximum agreement area cladogram;maximum agreement subset", "title": "Pattern identification in biogeography"} {"abstract": "High-performance clusters have been widely deployed to solve challenging and rigorous scientific and engineering tasks. On one hand, high performance is certainly an important consideration in designing clusters to run parallel applications. On the other hand, the ever-increasing energy cost requires us to effectively conserve energy in clusters. To achieve the goal of optimizing both performance and energy efficiency in clusters, in this paper, we propose two energy-efficient duplication-based scheduling algorithms: Energy-Aware Duplication (EAD) scheduling and Performance-Energy Balanced Duplication (PEBD) scheduling.
Existing duplication-based scheduling algorithms replicate all possible tasks to shorten schedule length without reducing energy consumption caused by duplication. Our algorithms, in contrast, strive to balance schedule lengths and energy savings by judiciously replicating predecessors of a task if the duplication can aid in performance without degrading energy efficiency. To illustrate the effectiveness of EAD and PEBD, we compare them with a nonduplication algorithm, a traditional duplication-based algorithm, and the dynamic voltage scaling (DVS) algorithm. Extensive experimental results using both synthetic benchmarks and real-world applications demonstrate that our algorithms can effectively save energy with marginal performance degradation.", "keywords": "homogeneous clusters;energy-aware scheduling;duplication algorithms", "title": "EAD and PEBD: Two Energy-Aware Duplication Scheduling Algorithms for Parallel Tasks on Homogeneous Clusters"} {"abstract": "In this paper, we introduce an advanced architecture of K-means clustering-based polynomial Radial Basis Function Neural Networks (p-RBF NNs) designed with the aid of Particle Swarm Optimization (PSO) and Differential Evolution (DE) and develop a comprehensive design methodology supporting their construction. The architecture of the p-RBF NNs comes as a result of a synergistic usage of the evolutionary optimization-driven hybrid tools. The connections (weights) of the proposed p-RBF NNs are of a functional character and are realized by considering four types of polynomials. In order to design the optimized p-RBF NNs, a prototype (center value) of each receptive field is determined by running the K-means clustering algorithm and then a prototype and a spread of the corresponding receptive field are further optimized through running Particle Swarm Optimization (PSO) and Differential Evolution (DE). The Weighted Least Square Estimation (WLSE) is used to estimate the coefficients of the polynomials (which serve as functional connections of the network). The performance of the proposed model and the comparative analysis involving models designed with the aid of PSO and DE are presented in the case of a nonlinear function and two Machine Learning (ML) datasets.", "keywords": "polynomial radial basis function neural networks;k-means clustering;particle swarm optimization;differential evolution algorithm;weighted least square estimation", "title": "Design of K-means clustering-based polynomial radial basis function neural networks (pRBF NNs) realized with the aid of particle swarm optimization and differential evolution"} {"abstract": "Prototypes of interactive computer systems have been built that can begin to detect and label aspects of human emotional expression, and that respond to users experiencing frustration and other negative emotions with emotionally supportive interactions, demonstrating components of human skills such as active listening, empathy, and sympathy. These working systems support the prediction that a computer can begin to undo some of the negative feelings it causes by helping a user manage his or her emotional state. This paper clarifies the philosophy of this new approach to human-computer interaction: deliberately recognising and responding to an individual user's emotions in ways that help users meet their needs.
We define user needs from a broader perspective than has hitherto been discussed in the HCI community, to include emotional and social needs, and examine technology's emerging capability to address and support such needs. We raise and discuss potential concerns and objections regarding this technology, and describe several opportunities for future work.", "keywords": "user emotions;affective computing;social interface;frustration;human-centred designs;empathetic interface;emotional needs", "title": "Computers that recognise and respond to user emotion: theoretical and practical implications"} {"abstract": "Large-scale data mining is often aided with graphic visualizations to facilitate a better understanding of the data and results. This is especially true for visual data and highly detailed data too complex to be easily understood in raw forms. In this work, we present several of our recent interdisciplinary works in data mining solar image repositories and discuss the over-arching need for effective visualizations of data, metadata, and results along the way. First, we explain the complex characteristics and overwhelming abundance of image data being produced by NASA's Solar Dynamics Observatory (SDO). Then we discuss the wide scope of solar data mining and highlight visual results from work in data labeling, classification, and clustering. Lastly, we present an overview of the first-ever Content-Based Image Retrieval (CBIR) system for solar images, and conclude with a brief look at the direction of our future research.", "keywords": "solar images;visualization;data mining;cbir", "title": "On visualization techniques for solar data mining"} {"abstract": "The imperfect nature of context in Ambient Intelligence environments and the special characteristics of the entities that possess and share the available context information render contextual reasoning a very challenging task. The accomplishment of this task requires formal models that handle the involved entities as autonomous logic-based agents and provide methods for handling the imperfect and distributed nature of context. This paper proposes a solution based on the Multi-Context Systems paradigm in which local context knowledge of ambient agents is encoded in rule theories (contexts), and information flow between agents is achieved through mapping rules that associate concepts used by different contexts. To handle imperfect context, we extend Multi-Context Systems with nonmonotonic features: local defeasible theories, defeasible mapping rules, and a preference ordering on the system contexts. On top of this model, we have developed an argumentation framework that exploits context and preference information to resolve potential conflicts caused by the interaction of ambient agents through the mappings, and a distributed algorithm for query evaluation.", "keywords": "ambient intelligence;contextual reasoning;defeasible reasoning;argumentation systems", "title": "Defeasible Contextual Reasoning with Arguments in Ambient Intelligence"} {"abstract": "Positive definite kernels on probability measures have been recently applied to classification problems involving text, images, and other types of structured data. Some of these kernels are related to classic information theoretic quantities, such as (Shannon's) mutual information and the Jensen-Shannon (JS) divergence. Meanwhile, there have been recent advances in nonextensive generalizations of Shannon's information theory.
This paper bridges these two trends by introducing nonextensive information theoretic kernels on probability measures, based on new JS-type divergences. These new divergences result from extending the two building blocks of the classical JS divergence: convexity and Shannon's entropy. The notion of convexity is extended to the wider concept of q-convexity, for which we prove a Jensen q-inequality. Based on this inequality, we introduce Jensen-Tsallis (JT) q-differences, a nonextensive generalization of the JS divergence, and define a k-th order JT q-difference between stochastic processes. We then define a new family of nonextensive mutual information kernels, which allow weights to be assigned to their arguments, and which includes the Boolean, JS, and linear kernels as particular cases. Nonextensive string kernels are also defined that generalize the p-spectrum kernel. We illustrate the performance of these kernels on text categorization tasks, in which documents are modeled both as bags of words and as sequences of characters.", "keywords": "positive definite kernels;nonextensive information theory;tsallis entropy;jensen-shannon divergence;string kernels", "title": "Nonextensive Information Theoretic Kernels on Measures"} {"abstract": "In this paper we investigate the efficiency of cryptosystems based on ordinary elliptic curves over fields of characteristic three. We look at different representations for curves and consider some of the algorithms necessary to perform efficient point multiplication. We give example timings for our operations and compare them with timings for curves in characteristic two of a similar level of security. We show that using the Hessian form in characteristic three produces a point multiplication algorithm under 50 percent slower than the equivalent system in characteristic two. Thus it is conceivable that curves in characteristic three could offer greater performance than currently perceived by the community.", "keywords": "elliptic curve cryptography;hessian form;characteristic three", "title": "Point multiplication on ordinary elliptic curves over fields of characteristic three"} {"abstract": "We consider a three-phase inverse Stefan problem. Such a problem consists in reconstructing the function describing the heat-transfer coefficient, when the positions of the moving solid and liquid interfaces are well-known. We introduce three partial problems for each phase (liquid, solid and mushy) separately. The solutions of these problems are used for the determination of the unknown heat-transfer coefficient. The missing data for a mushy (solid) phase are computed from over-determined data at the moving liquid (solid) interface taking into account the transmission condition. At the end we present numerical calculations in one dimension using piecewise linear continuous finite elements in order to demonstrate the efficiency of the designed numerical algorithm.", "keywords": "three-phase inverse stefan problem;recovery of the heat-transfer coefficient", "title": "Determination of the heat-transfer coefficient during solidification of alloys"} {"abstract": "Firstly, this paper analyzes the basic principles and processes of the spatial pattern changes of land use in towns and villages, and the result shows that the land resource demands of urban development and population growth lead to the spatial pattern changes. Secondly, in order to grasp land use changes better, the paper proposes a method for the simulation of spatial patterns.
The simulation method can be divided into two parts: one is a quantitative forecast using the Markov model, and the other is simulating the spatial pattern changes using the CA model. Together, these two models constitute the simulation model of the spatial pattern of land use in towns and villages. Finally, selecting Fangshan, a district of Beijing, as the experimental area, both the quantitative and spatial pattern changing characteristics are investigated by building a land use change dataset with spatial analysis methods, based on the land use data for 2001, 2006 and 2008; CA-Markov is used to simulate the spatial pattern of land use in Fangshan for 2015.", "keywords": "land use change;spatial pattern;markov;cellular automata;fangshan district in beijing", "title": "Simulation of land use spatial pattern of towns and villages based on CA-Markov model"} {"abstract": "FUNET has been operating a public, globally-used 6to4 (RFC 3056) relay router since November 2001. The traffic has been logged and is now analyzed to gather information on 6to4 and IPv6 deployment. Among other figures, we note that the number of 6to4 capable nodes has increased by an order of magnitude in half a year: in April 2004, there are records of about 2 million different 6to4 nodes using this particular relay. The vast majority of this is just testing the availability of the relay, done by Microsoft Windows systems, but the real traffic has also increased over time. While the observed 6to4 traffic has typically consisted of relatively simple system-level applications, or applications by power users, the emergence of peer-to-peer applications such as BitTorrent was also observed.", "keywords": "ipv6;6to4;ipv6 transition", "title": "Observations of IPv6 traffic on a 6to4 relay"} {"abstract": "This work deals with the modelling and control of a riderless bicycle rolling on a moving plane. It is assumed here that the bicycle is controlled by a pedalling torque, a directional torque and by a rotor mounted on the crossbar that generates a tilting torque. In particular, a kinematic model of the bicycle's motion is derived by using its dynamic model. Then, using this kinematic model, the expressions for the applied torques are obtained.", "keywords": "riderless bicycle;moving plane;circular rotor plate;nonholonomic constraints;inverse dynamics control;stabilization", "title": "Modelling and control of the motion of a riderless bicycle rolling on a moving plane"} {"abstract": "Simplified models are needed for performing large-scale network simulations involving thousands of cells. Ideally, these models should be as simple as possible, but still capture important electrotonic properties, such as voltage attenuation. Here, we propose a method to design simplified models with correct voltage attenuation, based on camera-lucida reconstructions of neurons. The simplified model geometry is fit to the detailed model such that it preserves: (i) total membrane area, (ii) input resistance, (iii) time constant and (iv) voltage attenuation for current injection in the soma. Using the three dimensional reconstruction of a layer VI pyramidal cell, we show that this procedure leads to an efficient simplified model which preserves voltage attenuation for somatic current injection as well as for distributed synaptic inputs in dendrites. Attenuation was also correctly captured in the presence of synaptic background activity.
These simplified models should be useful for performing network simulations of neurons with electrotonic properties consistent with detailed morphologies.", "keywords": "cerebral cortex;dendritic integration;synaptic background activity;computational models;network", "title": "Simplified models of neocortical pyramidal cells preserving somatodendritic voltage attenuation"} {"abstract": "We present a new manipulation system for hybrid force assisted assembly. We propose a hybrid force assisted assembly process for 3-D helical nanobelts. A helical nanobelt tweezer and a sensing probe have been assembled using the proposed method.", "keywords": "hybrid force-assisted assembly;ultra-flexible nanostructures;thin-film nanostructures;3-d helical nanobelts", "title": "Hybrid force-assisted 3-D assembly of helical nanobelts"} {"abstract": "In this paper, we develop a vision-based inspection system for roundness measurements. A stochastic optimization approach has been proposed to compute the reference circles of MIC (maximum inscribing circle), MCC (minimum circumscribing circle) and MZC (minimum zone circle) methods. The proposed algorithm is a hybrid optimization method based on simulated annealing and Hooke-Jeeves pattern search. From the experimental results, it is noted that the algorithm can solve the roundness assessment problems effectively and efficiently. The developed vision-based inspection system can be an on-line tool for the measurement of circular components.", "keywords": "measurement;roundness error;simulated annealing;pattern search", "title": "A stochastic optimization approach for roundness measurements"} {"abstract": "In this paper, an asymptotic expansion is constructed to solve second-order differential equation systems with highly oscillatory forcing terms involving multiple frequencies. An asymptotic expansion is derived in inverse powers of the oscillatory parameter and its truncation results in a very effective method of discretizing the differential equation system in question. Numerical experiments illustrate the effectiveness of the asymptotic method in contrast to the standard Runge-Kutta method.", "keywords": "highly oscillatory problems;second-order differential equations;modulated fourier expansions;multiple frequencies;numerical analysis", "title": "Asymptotic solvers for second-order differential equation systems with multiple frequencies"} {"abstract": "This paper proposes to extend the discrete Verhulst power equilibrium approach, previously suggested in [1], to the power-rate optimal allocation problem. Multirate users associated with different types of traffic are aggregated into distinct user classes, with the assurance of minimum rate allocation per user and QoS. Herein, the Verhulst power allocation algorithm of [1] is adapted to the DS-CDMA joint power-rate control problem.
The analysis considers static and dynamic channels, as well as the convergence time (number of iterations) and the quality of the solution, measured in terms of the normalized mean squared error (NSE) with respect to the analytical solution based on interference matrix inversion; a comparison with the classical Foschini algorithm [2] and a computational complexity analysis are also included.", "keywords": "resource allocation;power-rate control;siso multirate ds-cdma;discrete verhulst equilibrium equation;qos", "title": "Power-Rate Allocation in DS-CDMA Systems Based on Discretized Verhulst Equilibrium"} {"abstract": "Transduction is an inference mechanism adopted from several classification algorithms capable of exploiting both labeled and unlabeled data and making the prediction for the given set of unlabeled data only. Several transductive learning methods have been proposed in the literature to learn transductive classifiers from examples represented as rows of a classical double-entry table (or relational table). In this work we consider the case of examples represented as a set of multiple tables of a relational database and we propose a new relational classification algorithm, named TRANSC, that works in a transductive setting and employs a probabilistic approach to classification. Knowledge on the data model, i.e., foreign keys, is used to guide the search process. The transductive learning strategy iterates on a k-NN based re-classification of labeled and unlabeled examples, in order to identify borderline examples, and uses the relational probabilistic classifier Mr-SBC to bootstrap the transductive algorithm. Experimental results confirm that TRANSC outperforms its inductive counterpart (Mr-SBC).", "keywords": "multi-relational data mining;transductive classification;relational probabilistic classification;relational learning;transduction", "title": "A relational approach to probabilistic classification in a transductive setting"} {"abstract": "Preterm birth is the leading cause of perinatal morbidity and mortality, but a precise mechanism is still unknown. Hence, the goal of this study is to explore the risk factors of preterm birth using data mining with a neural network and the decision tree C5.0. The original medical data were collected from a prospective pregnancy cohort by a professional research group in National Taiwan University. Using a nested case-control study design, a total of 910 mother-child dyads were recruited from 14,551 in the original data. Thousands of variables are examined in this dataset, including basic characteristics, medical history, environment, and occupation factors of parents, and variables related to infants. The results indicate that multiple birth, hemorrhage during pregnancy, age, disease, previous preterm history, body weight before pregnancy and height of pregnant women, and paternal lifestyle risk factors related to drinking and smoking are the important risk factors of preterm birth. Hence, the findings of our study will be useful for parents, medical staff, and public health workers in attempting to detect high risk pregnant women and provide intervention early to reduce and prevent preterm birth.", "keywords": "preterm birth;data mining;neural network;decision tree", "title": "Exploring the risk factors of preterm birth using data mining"} {"abstract": "Complex multi-processor systems-on-chip and distributed embedded systems exhibit a confusing variety of run time interdependencies.
For reliable timing validation, not only application, but also architecture, scheduling and communication properties have to be considered. This is very different from functional validation, where architecture, scheduling and communication can be idealized. To avoid unknown corner-case coverage in simulation-based validation on one hand, and the state-space explosion or over-simplification of unified formal performance models on the other, we take a compositional approach and combine different efficient models and methods for timing analysis of single processes, real-time operating system (RTOS) overhead, single processors and communication components, and finally multiple connected components. As a result, timing analysis of complex, heterogeneous embedded systems becomes feasible.", "keywords": "real-time embedded systems;performance verification;interval analysis", "title": "Interval-based analysis in embedded system design"} {"abstract": "Serializability is a commonly used correctness condition in concurrent programming. When a concurrent module is serializable, certain other properties of the module can be verified by considering only its sequential executions. In many cases, concurrent modules guarantee serializability by using standard locking protocols, such as tree locking or two-phase locking. Unfortunately, according to the existing literature, verifying that a concurrent module adheres to these protocols requires considering concurrent interleavings. In this paper, we show that adherence to a large class of locking protocols (including tree locking and two-phase locking) can be verified by considering only sequential executions. The main consequence of our results is that in many cases, the (manual or automatic) verification of serializability can itself be done using sequential reasoning.", "keywords": "concurrency;verification;serializability;reduction", "title": "sequential verification of serializability"} {"abstract": "Diversifying the search results of queries seeking different viewpoints about controversial topics is key to improving user satisfaction. The challenge in finding different opinions is how to maximize the number of discussed arguments without being biased against specific sentiments. This paper addresses the issue by first introducing a new model that represents the patterns occurring in documents about controversial topics, and second, proposing an opinion diversification model that uses (1) relevance of documents, (2) semantic diversification to capture different arguments and (3) sentiment diversification to identify positive, negative and neutral sentiments about the query topic. We have conducted our experiments using queries on various controversial topics and applied our diversification model on the set of documents returned by the Google search engine. The results show that our model outperforms the native ranking of Web pages about controversial topics by a significant margin.", "keywords": "ranking;opinion diversification", "title": "diversifying search results of controversial queries"} {"abstract": "Lamarckian learning has been introduced into evolutionary computation to enhance the ability of local search. The relevant research topic, memetic computation, has received a significant amount of interest. In this study, a novel memetic computational framework is proposed by simulating the integrated regulation between neural and immune systems.
A Lamarckian learning strategy, simulating the unidirectional regulation of the neural system on the immune system, is designed. Consequently, an immune memetic algorithm based on Lamarckian learning is proposed for numerical optimization. The proposed algorithm combines the advantages of immune algorithms and mathematical programming, and performs well in both global and local search. The simulation results based on ten low-dimensional and ten high-dimensional benchmark problems show that the immune memetic algorithm outperforms the basic genetic algorithm-based memetic algorithm in solving most of the test problems.", "keywords": "memetic computation;artificial immune system;lamarckian learning;genetic algorithm;numerical optimization", "title": "Memetic computation based on regulation between neural and immune systems: the framework and a case study"} {"abstract": "This paper presents an approach to the optimal design of elastic flywheels using an Injection Island Genetic Algorithm (iiGA), summarizing a sequence of results reported in earlier publications. An iiGA in combination with a structural finite element code is used to search for shape variations and material placement to optimize the Specific Energy Density (SED, rotational energy per unit weight) of elastic flywheels while controlling the failure angular velocity. iiGAs seek solutions simultaneously at different levels of refinement of the problem representation (and correspondingly different definitions of the fitness function) in separate subpopulations (islands). Solutions are sought first at low levels of refinement with an axi-symmetric plane stress finite element code for high-speed exploration of the coarse design space. Next, individuals are injected into populations with a higher level of resolution that use an axi-symmetric three-dimensional finite element code to "fine-tune" the structures. A greatly simplified design space (containing two million possible solutions) was enumerated for comparison with various approaches that include: simple GAs, threshold accepting (TA), iiGAs and hybrid iiGAs. For all approaches compared on this simplified problem, all variations of the iiGA were found to be the most efficient. This paper summarizes results obtained by studying a constrained optimization problem with a huge design space, approached with parallel GAs of various topological structures and several different types of iiGA, in order to compare efficiency. For this problem, all variations of the iiGA were found to be extremely efficient in terms of the computational time required to reach a final solution of similar fitness when compared to the parallel GAs.", "keywords": "optimization;automated design;flywheel;genetic algorithm and fem", "title": "Optimal design of flywheels using an injection island genetic algorithm"} {"abstract": "Data from an international sample of 312 hotels (from the UK and Spain) is analyzed. The process through which CRM technology translates into organizational performance is described. A CRM technology, when properly implemented, shows a positive effect on performance. Knowledge management and organizational commitment acted as relevant mediators.
Organizational commitment proved to be the main determinant of CRM success.", "keywords": "customer relationship management;crm success;crm technology infrastructure;organizational commitment;knowledge management", "title": "Paving the way for CRM success: The mediating role of knowledge management and organizational commitment"} {"abstract": "We present SUSY_FLAVOR version 2, a Fortran 77 program that calculates low-energy flavor observables in the general R-parity conserving MSSM. For a set of MSSM parameters as input, the code gives predictions for: Electric dipole moments of the leptons and the neutron. Anomalous magnetic moments (i.e. g−2) of the leptons. Radiative lepton decays (μ → eγ, τ → eγ and τ → μγ). Rare Kaon decays (KL → π⁰νν̄ and K⁺ → π⁺νν̄). Leptonic B decays (Bs,d → l⁺l⁻, B → τν and B → Dτν). Radiative B decays (B → Xsγ). ΔF = 2 processes (K̄⁰–K⁰, D̄–D, B̄d–Bd and B̄s–Bs mixing). Program title: SUSY_FLAVOR v2 Catalogue identifier: AEGV_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGV_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 15683 No. of bytes in distributed program, including test data, etc.: 89130 Distribution format: tar.gz Programming language: Fortran 77. Computer: Any. Operating system: Any, tested on Linux. Classification: 11.6. Does the new version supersede the previous version?: Yes Catalogue identifier of previous version: AEGV_v1_0 Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 2180 Nature of problem: Predicting CP-violating observables, meson mixing parameters and branching ratios for a set of rare processes in the general R-parity conserving MSSM. Solution method: We use standard quantum theoretical methods to calculate Wilson coefficients in the MSSM at one loop, including QCD corrections at higher orders when this is necessary and possible. The input parameters can be read from an external file in SLHA format. Reasons for new version: A major rewrite of the internal code structure to accommodate higher order corrections; new observables added. Summary of revisions: SUSY_FLAVOR v2.0 is able to perform resummation of chirally enhanced corrections to all orders of perturbation expansion (v1.0 included 1-loop terms only). Routines calculating new observables are added: the g−2 lepton magnetic moment anomaly; μ → eγ and τ → eγ, μγ decays; B → Dτν decays; B → τν, μν, eν decays. Parameter initialization in the sfermion sector is simplified and follows, by default, the SLHA2 conventions. Running time: For a single parameter set approximately 1 s in double precision on a PowerBook Mac G4.", "keywords": "rare decays;flavor and cp violation;supersymmetry;general mssm;higher order resummation;fortran 77 code", "title": "SUSY_FLAVOR v2: A computational tool for FCNC and CP-violating processes in the MSSM"} {"abstract": "Iris recognition is a biometric technology which shows a very high level of recognition accuracy, but low resolution (LR) iris images cause degradation of the recognition performance. Therefore, a zoom lens with a long focal length is used in an iris camera. However, a bulky and costly zoom lens whose focal length is longer than 150 mm is required for capturing the iris image at a distance, which can increase the size and cost of the system.
In order to overcome this problem, we propose a new super-resolution method which restores a single LR iris image into a high resolution (HR) iris image. Our research is novel in the following three ways compared to previous works. First, in order to prevent the loss of the middle and high frequency components of the iris patterns in the original image, the LR iris image is up-sampled using multiple multi-layered perceptrons (MLPs). Second, a point spread function (PSF) and a constrained least square (CLS) filter are used to remove sensor blurring in the up-sampled image. Third, the optimal parameters of the CLS filter and PSF in terms of the recognition accuracy are determined according to the zoom factor of the LR image. The experimental results show that the accuracy of iris recognition with the HR images restored by the proposed method is much enhanced compared to the three previous methods.", "keywords": "super-resolution restoration;mlp;iris recognition;cls filter", "title": "Super-Resolution Iris Image Restoration Based on Multiple MLPs and CLS Filter"} {"abstract": "Patterns are used in different disciplines as a way to record expert knowledge for problem solving in specific areas. Their systematic use in Software Engineering promotes quality, standardization, reusability and maintainability of software artefacts. The full realisation of their power is however hindered by the lack of a standard formalization of the notion of pattern. Our goal is to provide a language-independent formalization of the notion of pattern, so that it allows its application to different modelling languages and tools, as well as generic methods to enable pattern discovery, instantiation, composition, and conflict analysis. For this purpose, we present a new visual and formal, language-independent approach to the specification of patterns. The approach is formulated in a general way, based on graphs and category theory, and allows the specification of patterns in terms of (nested) variable submodels, constraints on their allowed variance, and inter-pattern synchronization across several diagrams (e.g. class and sequence diagrams for UML design patterns). We provide a formal notion of pattern satisfaction by models and propose mechanisms to suggest model transformations so that models become consistent with the patterns. We define methods for pattern composition, and conflict analysis. We illustrate our proposal on UML design patterns, and discuss its generality and applicability on different types of patterns, e.g. workflow patterns, enterprise integration patterns and interaction patterns. The approach has proven to be powerful enough to formalize patterns from different domains, providing methods to analyse conflicts and dependencies that usually are expressed only in textual form. Its language independence makes it suitable for integration in meta-modelling tools and for use in Model-Driven Engineering.", "keywords": "pattern formalization;pattern-based modelling;pattern composition;pattern conflicts", "title": "A language-independent and formal approach to pattern-based modelling with support for composition and analysis"} {"abstract": "This research investigates whether early mover advantage (EMA) exists among entrepreneurial e-tailers operating on third-party e-commerce platforms. 
Contrary to traditional wisdom, the current research hypothesizes that e-tailers may enjoy early mover advantages because of the consumer demand inertia amplified by the nature of the Internet and the system design characteristics of e-commerce platforms. We also argue that customer relationship management capabilities help enhance early mover advantages in an online setting. We employ panel data on 7309 e-tailers to perform analyses and find empirical evidence that strongly supports the abovementioned hypotheses.", "keywords": "e-tailer;e-commerce platform;early mover advantage;customer relationship management capability;market performance", "title": "Early mover advantage in e-commerce platforms with low entry barriers: The role of customer relationship management capabilities"} {"abstract": "This article describes the architecture and design of an IPTV network monitoring system and some of the use cases it enables. The system is based on distributed agents within IPTV terminal equipment (set-top boxes), which collect and send the data to a server where it is analyzed and visualized. In the article we explore how large amounts of collected data can be utilized for monitoring the quality of service and user experience in real time, as well as for discovering trends and anomalies over longer periods of time. Furthermore, the data can be enriched using external data sources, providing a deeper understanding of the system by discovering correlations with events outside of the monitored domain. Four supported use cases are described, among them using weather information for explaining away IPTV quality degradation. The system has been successfully deployed and is in operation at the Slovenian IPTV provider Telekom Slovenije.", "keywords": "data visualization;network security;intrusion detection", "title": "Contextualized Monitoring and Root Cause Discovery in IPTV Systems Using Data Visualization"} {"abstract": "We consider a system consisting of N parallel servers, where jobs with different resource requirements arrive and are assigned to the servers for processing. Each server has a finite resource capacity and therefore can serve only a finite number of jobs at a time. We assume that different servers have different resource capacities. A job is accepted for processing only if the resource requested by the job is available at the server to which it is assigned. Otherwise, the job is discarded or blocked. We consider randomized schemes to assign jobs to servers with the aim of reducing the average blocking probability of jobs in the system. In particular, we consider a scheme that assigns an incoming job to the server having the maximum available vacancy or unused resource among d randomly sampled servers. We consider the system in the limit where both the number of servers and the arrival rates of jobs are scaled by a large factor. This gives rise to a mean field analysis. We show that in the limiting system the servers behave independently, a property termed propagation of chaos. Stationary tail probabilities of server occupancies are obtained from the stationary solution of the mean field, which is shown to be unique and globally attractive. We further characterize the rate of decay of the stationary tail probabilities.
Numerical results suggest that the proposed scheme significantly reduces the average blocking probability of jobs as compared to static schemes that probabilistically route jobs to servers independently of their states.", "keywords": "mean field;propagation of chaos;loss models;cloud computing;power-of-d", "title": "Mean field and propagation of chaos in multi-class heterogeneous loss models"} {"abstract": "Selecting an instructive story from a video case base is an information retrieval problem, but standard indexing and retrieval techniques [1] were not developed with such applications in mind. The classical model assumes a passive retrieval system queried by interested and well-informed users. In educational situations, students cannot be expected to form appropriate queries or to identify their own ignorance. Systems that teach must, therefore, be active retrievers that formulate their own retrieval cues and reason about the appropriateness of intervention. The Story Producer for InteractivE Learning (SPIEL) is an active retrieval system for recalling stories to tell to students who are learning social skills in a simulated environment [2,3]. SPIEL is a component of the Guided Social Simulation (GuSS) architecture [4] used to build YELLO, a program that teaches account executives the fine points of selling Yellow Pages advertising. SPIEL uses structured, conceptual indices derived from research in case-based reasoning [5,6]. SPIEL's manually-created indices are detailed representations of what stories are about, and they are needed to make precise assessments of stories' relevance. SPIEL's opportunistic retrieval architecture operates in two phases. During the storage phase, the system uses its educational knowledge encapsulated in a library of storytelling strategies to determine, for each story, what an opportunity to tell that story would look like. During the retrieval phase, the system tries to recognize those opportunities while the student interacts with the simulation. This design is similar to opportunistic memory architectures proposed for opportunistic planning [7,8].", "keywords": "indexing;multimedia;intelligent tutoring systems;case-based reasoning", "title": "Conceptual indexing and active retrieval of video for interactive learning environments"} {"abstract": "We present challenges of using agile practices in traditional enterprise environments. We organize the challenges under two factors. For both factors, we identify successful mitigation strategies.", "keywords": "agile development;enterprise environment;grounded theory", "title": "When agile meets the enterprise"} {"abstract": "Among the various traditional approaches to pattern recognition, the statistical approach has been most intensively studied and used in practice. This paper presents a new classifier called MLAC for multiclass classification based on learning automata. The proposed classifier, using a soft decision method, can find the optimal hyperplanes in the solution space and separate the available classes from each other well. We have tested the MLAC classifier on some multiclass datasets including IRIS, WINE and GLASS. The results show a significant improvement in comparison with previous learning automata based classifiers, as it achieves higher accuracy and lower running time.
Also, in order to evaluate the performance of the proposed MLAC classifier, it has been compared with conventional classifiers such as K-Nearest Neighbor, Multilayer Perceptron, the Genetic classifier and the Particle Swarm classifier on these datasets in terms of accuracy. The obtained results show that the proposed MLAC classifier not only improves classification accuracy, but also reduces time complexity.", "keywords": "pattern classification;learning automata;soft decision;multiclass classifier", "title": "Presenting a new multiclass classifier based on learning automata"} {"abstract": "ZigBee uses the network security and application profile layers of the IEEE 802.15.4 and ZigBee Alliance standards for reliable, low-powered, wireless data communications. However, ZigBee is comparatively insecure and has difficulty distributing shared symmetric keys between each pair of nodes. In addition, the ZigBee protocol is inadequate for large sensor networks, which may consist of several very large scale clusters. In this paper, we first construct a secure ZigBee scheme for realistic scenarios consisting of a large network with several clusters containing coordinators and numerous devices. We present a new key management protocol for ZigBee networks, which can be used among participants of different clusters, and analyze its performance.", "keywords": "ieee 802.15.4;zigbee cluster;key management;message complexity", "title": "Secured communication protocol for internetworking ZigBee cluster networks"} {"abstract": "The stable Galerkin formulation and a stabilized Galerkin least squares formulation for the Stokes problem are analyzed in the context of the hp-version of the finite element method. Theoretical results for both formulations establish exponential rates of convergence under realistic assumptions on the input data. We confirm these results by a series of numerical experiments on an L-shaped domain where the solution exhibits corner singularities.", "keywords": "hp-FEM;Stokes problem;Galerkin formulation;Galerkin least squares formulation", "title": "hp-finite element simulations for Stokes flow stable and stabilized"} {"abstract": "The hierarchical network structure was proposed in the early 1980s and is popular nowadays. The routing complexity and the routing table size are the two primary performance measures in a dynamic route guidance system. Although various algorithms exist for finding the best routing policy in a hierarchical network, hardly any work exists on studying and evaluating the aforementioned measures for a hierarchical network. In this paper, a new mathematical framework is proposed to compute the averages of the routing complexity and the routing table size, expressing them as functions of the hierarchical network parameters such as the number of hierarchical levels and the subscriber density (cluster population) for each hierarchical level.", "keywords": "joint optimization;hierarchical networks;routing;complexity", "title": "Joint Optimization of Complexity and Overhead for the Routing in Hierarchical Networks"} {"abstract": "In this research, we propose two different methods to solve the coupled Klein-Gordon-Zakharov (KGZ) equations: the Differential Quadrature (DQ) and Global Radial Basis Functions (GRBFs) methods. In the DQ method, the derivative value of a function with respect to a point is directly approximated by a linear combination of all functional values in the global domain.
The principal task in this method is the determination of the weight coefficients. We use two approaches to obtain these coefficients: cosine expansion (CDQ) and radial basis functions (RBFs-DQ); the former is a mesh-based method and the latter belongs to the class of meshless methods. Unlike the DQ method, the GRBF method directly substitutes the expression of the function approximation by RBFs into the partial differential equation. The main problem in the GRBFs method is the ill-conditioning of the interpolation matrix. To avoid this problem, we study the bases introduced in Pazouki and Schaback (2011) [44]. Some examples are presented to compare the accuracy and ease of implementation of the proposed methods. In the numerical examples, we concentrate on the Inverse Multiquadric (IMQ) and second-order Thin Plate Spline (TPS) radial basis functions. The variable shape parameter (exponential and random) strategies are applied to the IMQ function and the results are compared with the constant shape parameter.", "keywords": "differential quadrature method;klein-gordon-zakharov equations;radial basis functions;inverse multiquadric;thin plate spline", "title": "The solitary wave solution of coupled Klein-Gordon-Zakharov equations via two different numerical methods"} {"abstract": "Given the location of a relative maximum of the log-likelihood function, how can one assess whether it is the global maximum? This paper investigates an existing statistical tool, which, based on asymptotic analysis, answers this question by posing it as a hypothesis testing problem. A general framework for constructing tests for the global maximum is given. The characteristics of the tests are investigated for two cases: a correctly specified model and model mismatch. A finite sample approximation to the power is given, which provides a tool for performance prediction and a measure for comparison between tests. The sensitivity of the tests to model mismatch is analyzed in terms of the Rényi divergence and the Kullback-Leibler divergence between the true underlying distribution and the assumed parametric class, and tests that are insensitive to small deviations from the model are derived, thereby overcoming a fundamental weakness of existing tests. The tests are illustrated for three applications: passive localization or direction finding using an array of sensors, estimating the parameters of a Gaussian mixture model, and estimation of superimposed exponentials in noise, problems that are known to suffer from local maxima.", "keywords": "array processing;gaussian mixtures;global optimization;local maxima;maximum likelihood;parameter estimation;superimposed exponentials in noise", "title": "On tests for global maximum of the log-likelihood function"} {"abstract": "Non-reachability proofs in timed Petri nets have usually been done by proving non-reachability within the underlying timeless net. However, in many cases this approach fails. In this paper, we present an approach to prove non-reachability within the actual timed Petri net. For this purpose, we introduce a state equation for timed Petri nets in analogy to timeless nets.
Using this state equation, we can express reachability as a system of equations and inequations, which is solvable in polynomial time.", "keywords": "timed petrinet;duration net;state equation;non-reachability", "title": "Using state equation to prove non-reachability in timed petrinets"} {"abstract": "Here, we report that leucine enkephalin (LE) is neuroprotective to dopaminergic (DA) neurons at femtomolar concentrations through anti-inflammatory properties. Mesencephalic neuron-glia cultures pretreated with femtomolar concentrations of LE (10⁻¹⁵-10⁻¹³ M) protected DA neurons from lipopolysaccharide (LPS)-induced DA neurotoxicity, as determined by DA uptake assay and tyrosine hydroxylase (TH) immunocytochemistry (ICC). However, des-tyrosine leucine enkephalin (DTLE), an LE analogue that is missing the tyrosine residue required for binding to the kappa opioid receptor, was also neuroprotective (10⁻¹⁵-10⁻¹³ M), as determined by DA uptake assay and TH ICC. Both LE and DTLE (10⁻¹⁵-10⁻¹³ M) reduced LPS-induced superoxide production from microglia-enriched cultures. Further, both LE and DTLE (10⁻¹⁴, 10⁻¹³ M) reduced the LPS-induced tumor necrosis factor-alpha (TNFα) mRNA and TNFα protein from PHOX+/+ microglia, as determined by quantitative real-time RT-PCR and ELISA analysis in mesencephalic neuron-glia cultures, respectively. However, both peptides failed to inhibit TNFα expression in PHOX−/− cultures, which are unable to produce extracellular superoxide in response to LPS. Additionally, LE and DTLE (10⁻¹⁴, 10⁻¹³ M) failed to show any neuroprotection against LPS in PHOX−/− cultures. Together, these data indicate that LE and DTLE are neuroprotective at femtomolar concentrations through the inhibition of oxidative insult associated with microglial NADPH oxidase and the attenuation of the ROS-mediated amplification of TNFα gene expression in microglia.", "keywords": "microglial nadph oxidase;leucine enkephalin;des-tyrosine leucine enkephalin;lipopolysaccharide;tumor necrosis factor-alpha;neuroprotection;neurotoxicity", "title": "Microglial NADPH Oxidase Mediates Leucine Enkephalin Dopaminergic Neuroprotection"} {"abstract": "Traditionally, the answer to a database query is construed as the set of all tuples that meet the criteria stated. Strict adherence to this notion in query evaluation is, however, increasingly unsatisfactory because decision makers are more prone to adopting an exploratory strategy for information search which we call "getting some answers quickly, and perhaps more later." From a decision-maker's perspective, such a strategy is optimal for coping with information overload and makes economic sense (when used in conjunction with a micropayment mechanism). These new requirements present new opportunities for database query optimization. In this paper, we propose a progressive query processing strategy that exploits this behavior to conserve system resources and to minimize query response time and user waiting time. This is accomplished by the heuristic decomposition of user queries into subqueries that can be evaluated on demand. To illustrate the practicality of the proposed methods, we describe the architecture of a prototype system that provides a nonintrusive implementation of our approach.
Finally, we present experimental results obtained from an empirical study conducted using an Oracle Server that demonstrate the benefits of the progressive query processing strategy.", "keywords": "www;internet;progressive query evaluation;query optimization;query rewrite", "title": "Query rewriting for SWIFT (First) answers"} {"abstract": "The classical result in the theory of random graphs, proved by Erdős and Rényi in 1960, concerns the threshold for the appearance of the giant component in the random graph process. We consider a variant of this problem, with a Ramsey flavor. Now, each random edge that arrives in a sequence of rounds must be colored with one of r colors. The goal can be either to create a giant component in every color class, or alternatively, to avoid it in every color. One can analyze the offline or online setting for this problem. In this paper, we consider all these variants and provide nontrivial upper and lower bounds; in certain cases (like online avoidance) the obtained bounds are asymptotically tight.", "keywords": "random graphs;giant component;ramsey game", "title": "Ramsey Games With Giants"} {"abstract": "An artificial neural network (ANN) based adaptive estimator is presented in this paper for the estimation of rotor speed in a sensorless vector-controlled induction motor (IM) drive. The model reference adaptive system (MRAS) is formed with instantaneous and steady state reactive power. Selection of reactive power as the functional candidate in the MRAS automatically makes the system immune to the variation of stator resistance. Such an adaptive system performs satisfactorily at very low speed. However, it is observed that an unstable region exists in the speed-torque domain during regeneration. In this work, ANN is applied to overcome such a stability-related problem. The proposed method is validated through computer simulation using MATLAB/SIMULINK. Sample results from a laboratory prototype (using dSPACE-1104) have confirmed the usefulness of the proposed estimator.", "keywords": "artificial neural network;induction motor;model reference adaptive system;reactive power;sensorless;stability;vector control", "title": "An Adaptive Speed Sensorless Induction Motor Drive With Artificial Neural Network for Stability Enhancement"} {"abstract": "Obtaining accurate solutions for convection-diffusion equations is challenging due to the presence of layers when convection dominates the diffusion. To solve this problem, we design an adaptive meshing algorithm which optimizes the alignment of anisotropic meshes with the numerical solution. Three main ingredients are used. First, the streamline upwind Petrov-Galerkin method is used to produce a stabilized solution. Second, an adapted metric tensor is computed from the approximate solution. Third, optimized anisotropic meshes are generated from the computed metric tensor by an anisotropic centroidal Voronoi tessellation algorithm.
Our algorithm is tested on a variety of two-dimensional examples and the results show that the algorithm is robust in detecting layers and efficient in avoiding non-physical oscillations in the numerical approximation.", "keywords": "anisotropic mesh generation;metric tensor;convection-dominated problem;stabilized method", "title": "Adaptive anisotropic meshing for steady convection-dominated problems"} {"abstract": "We introduce an evolution-communication model for tissue P systems where communication rules are inspired by the general mechanism of cell communication based on signals and receptors: a multiset can enter a cell only in the presence of another multiset. Some basic variants of this model are also considered where communication is restricted either to be unidirectional or to use special multisets of objects called receptors. The universality for all these variants of tissue P systems is then proved by using two cells (three cells in the case of unidirectional communication) and rules of a minimal size.", "keywords": "membrane computing;turing computability;tissue", "title": "Cell communication in tissue P systems: Universality results"} {"abstract": "We present an algorithm for detecting and modeling rhythmic temporal patterns in the record of an individual's computer activity, or online "presence." The model is both predictive and descriptive of temporal features and is constructed with minimal a priori knowledge.", "keywords": "awareness;rhythms;user modeling;cscw;statistics", "title": "activity rhythm detection and modeling"} {"abstract": "We consider a radio network consisting of n stations represented as the complete graph on a set of n points in the Euclidean plane with edge weights ω(p,q) = |pq|^δ + C_p, for some constant δ > 1 and nonnegative offset costs C_p. Our goal is to find paths of minimal energy cost between any pair of points that do not use more than some given number k of hops. We present an exact algorithm for the important case when δ = 2, which requires O(kn log n) time per query pair (p, q). For the case of an unrestricted number of hops we describe a family of algorithms with query time O(n^(1+α)), where α > 0 can be chosen arbitrarily. If we relax the exactness requirement, we can find an approximate (1 + ε) solution in constant time by querying a data structure which has linear size and which can be built in O(n log n) time. The dependence on ε is polynomial in 1/ε. One tool we employ might be of independent interest: for any pair of points (p, q) ∈ P × P we can report in constant time the cluster pair (A, B) representing (p, q) in a well-separated pair decomposition of P.", "keywords": "computational geometry;communication networks", "title": "Energy-Efficient Paths in Radio Networks"} {"abstract": "We propose novel feature-extraction and classification methods for the automatic visual inspection of manufactured LEDs. The defects are located at the area of the p-electrodes and lead to a malfunction of the LED. Besides the complexity of the defects, low contrast and strong image noise make this problem very challenging. For the extraction of image characteristics we compute radially-encoded features that measure discontinuities along the p-electrode. Therefore, we propose two different methods: the first method divides the object into several radial segments for which mean and standard deviation are computed and the second method computes mean and standard deviation along different orientations.
For both methods we combine the features over several segments or orientations by computing simple measures such as the ratio between maximum and mean or standard deviation. Since defect-free LEDs are frequent and defective LEDs are rare, we apply and evaluate different novelty-detection methods for classification. Therefore, we use a kernel density estimator, kernel principal component analysis, and a one-class support vector machine. We further compare our results to Pearson's correlation coefficient, which is evaluated using an artificial reference image. The combination of the one-class support vector machine and radially-encoded segment features yields the best overall performance by far, with a false alarm rate of only 0.13% at a 100% defect detection rate, which means that every defect is detected and only very few defect-free p-electrodes are rejected. Our inspection system not only shows superior performance, but is also computationally efficient and can therefore be applied to further real-time applications, for example solder joint inspection. Moreover, we believe that novelty detection as used here can be applied to various expert-system applications.", "keywords": "defect detection;novelty detection;light emitting diodes;feature extraction;one-class svm;kernel pca;kernel density estimation", "title": "Novelty detection for the inspection of light-emitting diodes"} {"abstract": "The resolution tree problem consists of deciding whether a given sequence-like resolution refutation admits a tree structure. This paper shows the NP-completeness of both the resolution tree problem and a natural generalization of the resolution tree problem that does not involve resolution.", "keywords": "resolution;tree-like resolution;np-completeness", "title": "Finding a tree structure in a resolution proof is NP-complete"} {"abstract": "Interventional radiologists manipulate guidewires and catheters and steer stents through the patient's vascular system under X-ray imaging for the treatment of vascular diseases. The complexity of these procedures makes training mandatory in order to master hand-eye coordination, instrument manipulation and procedure protocols for each radiologist. In this paper we present a simulator for interventional radiology, which deploys a guidewire/catheter model based on the Cosserat theory applied to one-dimensional structures. This model starts from the energetic formulation of the filament, considering Hooke's laws of continuum mechanics. The Lagrange formulations are used to describe the model deformation. This model takes (self-)collisions into account and proves to be very efficient for interactive applications. The simulation environment allows the most common procedures to be carried out: guidewire and catheter navigation, contrast dye injection to visualize the vessels, balloon angioplasty and stent placement. Moreover, heartbeat as well as breathing are also simulated visually.", "keywords": "real-time simulation;cosserat rod theory;x-ray;interventional radiology;minimally invasive surgery", "title": "a real-time simulator for interventional radiology"} {"abstract": "This paper presents an approach for real-time video event recognition that combines the accuracy and descriptive capabilities of, respectively, probabilistic and semantic approaches. Based on a state-of-the-art knowledge representation, we define a methodology for building recognition strategies from event descriptions that consider the uncertainty of the low-level analysis.
Then, we efficiently organize such strategies for performing the recognition according to the temporal characteristics of events. In particular, we use Bayesian Networks and probabilistically-extended Petri Nets for recognizing, respectively, simple and complex events. For demonstrating the proposed approach, a framework has been implemented for recognizing human-object interactions in the video monitoring domain. The experimental results show that our approach improves the event recognition performance as compared to the widely used deterministic approach.", "keywords": "video event detection;semantic video analysis;bayes network;petri net;low-level uncertainty", "title": "A semantic-based probabilistic approach for real-time video event recognition"} {"abstract": "We establish the existence of a reproductive weak solution, a so-called periodic weak solution, for the equations of motion of magneto-micropolar fluids in exterior domains in R^3. ", "keywords": "navier-stokes equations;magneto-micropolar fluid;reproductive solution;exterior domain", "title": "Reproductive weak solutions of magneto-micropolar fluid equations in exterior domains"} {"abstract": "Simplified models of the vehicle structure are often used during the concept phase of vehicle development to improve the Noise, Vibration and Harshness (NVH) performance. Together with the structural joints and panels, beams are one of the constituent parts of these models. There are different approaches for their modeling and optimization handling, which, however, either cannot maintain similarity with the detailed Finite Element (FE) model (reference and/or optimized) or suffer from flexibility and performance issues. The objective of the current work is to develop and validate a new method which is an improved alternative to the existing approaches. It keeps the reference cross-sectional shapes of all 1D beams, but when each of them is rescaled during optimization, the beam is represented by means of generic cross-sectional properties. Thus a lighter and simpler representation of the concept beams is created and at the same time the connection with the detailed FE model is not broken. The feasibility of the approach is successfully verified for a set of representative beam cross-sections and then for an industrial case study. Its benefit in terms of computational time is also demonstrated. The proposed method can be easily implemented and then applied to make concept modeling for the vehicle structure faster and more flexible.", "keywords": "beam;vehicle structure;concept fe model;optimization;response surface", "title": "Beam Bounding Box: a novel approach for beam concept modeling and optimization handling"} {"abstract": "Within the HCI community, there is a growing interest in how technology is used and appropriated outside the workplace. In this paper, we present preliminary findings of how large displays, projection systems, and presentation software are used in American megachurches to support religious practice. These findings are based on ten visits to church services by the study's authors.
We describe how large display technology augments and replaces certain church traditions, and finish by discussing issues related to the design for church environments that are highlighted by this use of technology.", "keywords": "large displays;religious technologies;field studies", "title": "exploring the use of large displays in american megachurches"} {"abstract": "This paper intends to describe some of the primary social and cultural dynamics in South African home-based healthcare, using ethnographic case study material and design methodology. This constitutes a detailed narrative of the \"care experience\" in a poor community, emphasising the needs of and barriers to educational information, particularly concerning caregivers. In reaction to this context, a collaborative training model - AT-HOME 2.0 - emerges through design intervention. This is foreseen as a basic framework whereby caregivers (are encouraged to) develop educational content via information and communication technologies (e.g. mobile phones and social media). Such content can and may include experiences, suggestions, and guidelines that are relevant to the practice of caregiving. AT-HOME 2.0 will conceptually - and in some cases practically - demonstrate how educational content may be generated, published, and disseminated in the sphere of home-based healthcare.", "keywords": "home-based healthcare;informal learning;digital technologies;socio-cultural dynamics", "title": "AT-HOME 2.0 - An Educational Framework for Home-based Healthcare"} {"abstract": "In this paper we first identify limitations of compiler-controlled prefetching in a CC-NUMA multiprocessor with a write-invalidate cache coherence protocol. Compiler-controlled prefetch techniques for CC-NUMAs often focus only on stride accesses, which introduces a major limitation. We consider combining prefetch with two other compiler-controlled techniques to partly remedy the situation: (1) load-exclusive to reduce write-latency and (2) store-update to reduce read-latency. The purpose of each of these techniques in a machine with prefetch is to let them reduce latency for accesses which the prefetch technique could not handle. We evaluate two different scenarios, firstly with a hybrid compiler/hardware prefetch technique and secondly with an optimal stride-prefetcher. We find that the combined gains under the hybrid prefetch technique are significant for six applications we have studied: on average, 71% of the original write-stall time remains after using the hybrid prefetcher, and of these ownership-requests, 60% would be eliminated using load-exclusive; on average, 68% of the read-stall time remains after using the hybrid prefetcher and of these read-misses, 34% were serviced by remote caches and would be converted by store-update into misses serviced by a clean copy in memory, which reduces the read-latency.
With an optimal stride-prefetcher, our results show that it is beneficial to complement prefetching with the two techniques as well.", "keywords": "memory access latency reduction;compiler-initiated coherence;read-latency;migratory sharing;cc-numa multiprocessor;compiler-controlled prefetching;prefetch;compiler-analysis;prefetching;read-stall time;write-latency;multiprocessors;parallel architectures", "title": "overcoming limitations of prefetching in multiprocessors by compiler-initiated coherence action"} {"abstract": "Checkpointing and Communication Library (CCL) is recently developed software implementing CPU-offloaded checkpointing functionalities in support of optimistic parallel simulation on Myrinet clusters. Specifically, CCL implements a non-blocking execution mode of memory-to-memory data copy associated with checkpoint operations, based on data transfer capabilities provided by a programmable DMA engine on board of Myrinet network cards. Re-synchronization between CPU and DMA activities must sometimes be employed for several reasons, such as maintenance of data consistency, thus adding some overhead to (otherwise CPU cost-free) non-blocking checkpoint operations. In this paper we present a cost model for non-blocking checkpointing and derive a performance-effective re-synchronization semantic which we call minimum cost re-synchronization (MC). With this semantic, an occurrence of re-synchronization either commits an on-going DMA-based checkpoint operation (causing suspension of CPU activities) or aborts the operation (with possible increase in the expected rollback cost due to a reduced amount of committed checkpoints) on the basis of a minimum overhead expectation evaluated through the cost model. We have implemented MC within CCL, and we also report experimental results demonstrating the performance benefits from this optimized re-synchronization semantic, in terms of increase in the execution speed, for a Personal Communication System (PCS) simulation application.", "keywords": "optimistic simulation;performance optimization;checkpointing;dma", "title": "modeling and optimization of non-blocking checkpointing for optimistic simulation on myrinet clusters"} {"abstract": "For the last few years, many classifications and marking strategies have been proposed with the consideration of video streaming applications. According to IETF recommendation, two groups of solutions have been proposed. The first one assumes that applications or IP end points pre-mark their packets. The second solution relies on the router that is topologically closest to the video source. It should perform Multifield Classification and mark all incoming packets. This paper investigates the most popular marking strategies belonging to both of the above-mentioned groups of solutions. The pre-marking strategies based on H264 coder extensions are simulated with the NS-2 network simulator and the EvalVid-RA framework. The results are compared with the IETF recommendations for video traffic shaping in IP networks and marking algorithms proposed by other researchers.", "keywords": "video streaming;ip packet;marking;h264 video coding;diffserv architecture", "title": "Efficiency of IP Packets Pre-marking for H264 Video Quality Guarantees in Streaming Applications"} {"abstract": "This article contains a proof of the MDS conjecture for k <= 2p - 2.
That is, if S is a set of vectors of a k-dimensional vector space over the finite field with q elements in which every subset of S of size k is a basis, where q = p^h, p is prime, q is not prime, and k <= 2p - 2, then |S| <= q + 1. It also contains a short proof of the same fact for k <= p, for all q.", "keywords": "mds conjecture;linear codes;singleton bound", "title": "On sets of vectors of a finite vector space in which every subset of basis size is a basis II"} {"abstract": "Latent Dirichlet allocation defines hidden topics to capture latent semantics in text documents. However, it assumes that all the documents are represented by the same topics, resulting in the forced topic problem. To solve this problem, we developed a group latent Dirichlet allocation (GLDA). GLDA uses two kinds of topics: local topics and global topics. The highly related local topics are organized into groups to describe the local semantics, whereas the global topics are shared by all the documents to describe the background semantics. GLDA uses variational inference algorithms for both offline and online data. We evaluated the proposed model for topic modeling and document clustering. Our experimental results indicated that GLDA can achieve a competitive performance when compared with state-of-the-art approaches.", "keywords": "topic modeling;latent dirichlet allocation;group;variational inference;online learning;document clustering", "title": "Group topic model: organizing topics into groups"} {"abstract": "The field of digital libraries (DLs) coalesced in 1994: the first digital library conferences were held that year, awareness of the World Wide Web was accelerating, and the National Science Foundation awarded $24 Million (US) for the Digital Library Initiative (DLI). In this paper we examine the state of the DL domain after a decade of activity by applying social network analysis to the co-authorship network of the past ACM, IEEE, and joint ACM/IEEE digital library conferences. We base our analysis on a common binary undirected network model to represent the co-authorship network, and from it we extract several established network measures. We also introduce a weighted directed network model to represent the co-authorship network, for which we define AuthorRank as an indicator of the impact of an individual author in the network. The results are validated against conference program committee members in the same period. The results show clear advantages of PageRank and AuthorRank over degree, closeness and betweenness centrality metrics. We also investigate the amount and nature of international participation in the Joint Conference on Digital Libraries (JCDL).", "keywords": "digital library;authorrank;social network analysis;co-authorship", "title": "Co-authorship networks in the digital library research community"} {"abstract": "We present the first deterministic (1+epsilon)-approximation algorithm for finding a large matching in a bipartite graph in the semi-streaming model which requires only O((1/epsilon)^5) passes over the input stream. In this model, the input graph G = (V, E) is given as a stream of its edges in some arbitrary order, and storage of the algorithm is bounded by O(n polylog n) bits, where n = |V|.
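As a point of reference for this model (and not the algorithm of the paper), the classic single-pass greedy procedure below maintains a maximal matching, and hence a 1/2-approximation, using only O(n) words of memory:

```python
def greedy_matching(edge_stream):
    """One-pass greedy matching: keep an edge iff both endpoints are free.

    This is the standard semi-streaming baseline (a maximal matching, hence a
    1/2-approximation), shown only to make the streaming model concrete."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# Example: edges arrive in arbitrary order.
print(greedy_matching([(1, 2), (2, 3), (3, 4), (5, 6)]))
# -> [(1, 2), (3, 4), (5, 6)]
```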
The only previously known arbitrarily good approximation for general graphs is achieved by the randomized algorithm of McGregor (Proceedings of the International Workshop on Approximation Algorithms for Combinatorial Optimization Problems and Randomization and Computation, Berkeley, CA, USA, pp. 170-181, 2005), which uses Omega((1/epsilon)^(1/epsilon)) passes. We show that even for bipartite graphs, McGregor's algorithm needs (1/epsilon)^Omega(1/epsilon) passes, thus it is necessarily exponential in the approximation parameter. The design as well as the analysis of our algorithm require the introduction of some new techniques. A novelty of our algorithm is a new deterministic assignment of matching edges to augmenting paths which is responsible for the complexity reduction, and gets rid of randomization. We repeatedly grow an initial matching using augmenting paths up to a length of 2k + 1 for k = [2/epsilon]. We terminate when the number of augmenting paths found in one iteration falls below a certain threshold, also depending on k, that guarantees a 1 + epsilon approximation. The main challenge is to find those augmenting paths without requiring an excessive number of passes. In each iteration, using multiple passes, we grow a set of alternating paths in parallel, considering each edge as a possible extension as it comes along in the stream. Backtracking is used on paths that fail to grow any further. Crucial are the so-called position limits: when a matching edge is the ith matching edge in a path and it is then removed by backtracking, it will only be inserted into a path again at a position strictly less than i. This rule strikes a balance between terminating quickly on the one hand and giving the procedure enough freedom on the other hand.", "keywords": "bipartite graph matching;streaming algorithms;approximation schemes;approximation algorithms", "title": "Bipartite Matching in the Semi-streaming Model"} {"abstract": "We investigate a correlation coefficient of principal components from two sets of variables. Using perturbation expansion, we get a limiting distribution of the correlation. In addition, we obtain a limiting distribution of Fisher's z transformation of the above correlation. Additionally, we verify the accuracy of the limiting distributions using Monte Carlo simulations. Finally in this study, we present two examples and a bootstrap estimation.", "keywords": "principal component;perturbation method;canonical correlation analysis;fisher's z transformation;bootstrap", "title": "Correlation analysis of principal components from two populations"} {"abstract": "Numerical advantages of singular value decomposition over other least squares techniques. Elimination of statistically insignificant coefficients. Benefit of a statistical rejection procedure.", "keywords": "svd;stepwise regression;numerical analysis", "title": "Modeling by singular value decomposition and the elimination of statistically insignificant coefficients"} {"abstract": "A new linguistic aggregation rule that extends numerical majorities based on difference in support is introduced. Linguistic majorities with difference in support are formalised for fuzzy sets and 2-tuples. Both representations are proved to be mathematically isomorphic.
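As background for the second representation, here is a minimal sketch of the standard 2-tuple linguistic model (a label index plus a symbolic translation alpha in [-0.5, 0.5)); the scale and values below are illustrative, not taken from the paper:

```python
def delta(beta):
    """2-tuple from a value beta in [0, g]: (label index, symbolic translation).
    Note: Python's round uses banker's rounding; adequate for a sketch."""
    i = round(beta)
    return i, beta - i  # alpha lies in [-0.5, 0.5)

def delta_inv(i, alpha):
    """Numeric value represented by the 2-tuple (s_i, alpha)."""
    return i + alpha

# Aggregating assessments on a 7-label scale s0..s6 by arithmetic mean:
labels = ["s0", "s1", "s2", "s3", "s4", "s5", "s6"]
assessments = [(4, 0.0), (5, -0.2), (3, 0.4)]
mean = sum(delta_inv(i, a) for i, a in assessments) / len(assessments)
i, alpha = delta(mean)
print(labels[i], round(alpha, 2))  # -> s4 0.07
```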
A set of normative properties is shown to hold for the new linguistic majorities.", "keywords": "social choice;aggregation rule;linguistic preferences;linguistic majorities;fuzzy sets;2-tuples;difference in support", "title": "Linguistic majorities with difference in support"} {"abstract": "In this paper, an underfrequency load shedding (UFLS) scheme for implementation in microgrids (MGs) is proposed. This load shedding method estimates the power deficit based on the first derivative of the frequency. It considers power generation variations during the load shedding process. The proposed load shedding scheme is independent of MG parameters. A microgrid with several distributed energy resources (DERs) is adopted to demonstrate the effectiveness of the proposed method.", "keywords": "microgrid;minimum frequency;power deficit;underfrequency load shedding", "title": "An underfrequency load shedding scheme for islanded microgrids"} {"abstract": "The purpose of this paper is twofold. Firstly, a general conclusion about fuzzy entropy induced by distance measure is presented based on the axiomatic definitions of fuzzy entropy and distance measure. Secondly, some fuzzy entropy formulas which relate to the fuzzy entropy formula defined by De Luca and Termini (Inform. Control 20 (1972) 301) are given. ", "keywords": "measures of information;fuzzy entropy;sigma-fuzzy entropy;distance measure;sigma-distance measure", "title": "Some new fuzzy entropy formulas"} {"abstract": "Delamination occurring during the chemical and mechanical planarization process or wire bonding steps in packaging is a fundamental issue in the integration of low dielectric constant (low-k) materials into the multilayer structures of semiconductor chips. Since the failure phenomenon is known to be mainly attributed to low adhesion strength, the measurement of interfacial fracture toughness is critical to provide a quantitative basis for the choice of materials. In this study, a modified edge lift-off test was adopted to measure the fracture toughness of polymethylsilsesquioxane-based low-k materials with various chemical and physical structures. Interfacial fracture toughness was improved by adding multi-functional monomers to methylsilsesquioxane monomers or by increasing the percentage of functional end groups inside the prepolymers. In addition, the change in curing conditions and thickness influenced the adhesion performance presumably by changing the morphology of low-k materials.", "keywords": "adhesion;thin film;low-k dielectrics;xps", "title": "Adhesion properties of polymethylsilsesquioxane based low dielectric constant materials by the modified edge lift-off test"} {"abstract": "This paper presents a shared-memory self-stabilizing failure detector, asynchronous consensus and replicated state-machine algorithm suite, the components of which can be started in an arbitrary state and converge to act as a virtual state-machine. Self-stabilizing algorithms can cope with transient faults. Transient faults can alter the system state to an arbitrary state and hence cause a temporary violation of the safety property of the consensus. Started in an arbitrary state, the long-lived, memory-bounded, and self-stabilizing failure detector, asynchronous consensus, and replicated state-machine suite, presented in the paper, recovers to satisfy eventual safety and eventual liveness requirements. Several new techniques and paradigms are introduced. The bounded memory failure detector abstracts away synchronization assumptions using bounded heartbeat counters combined with a balance-unbalance mechanism.
The practically infinite paradigm is introduced in the scope of self-stabilization, where an execution of, say, 2^64 sequential steps is regarded as (practically) infinite. Finally, we present the first self-stabilizing wait-free reset mechanism that ensures eventual safety and can be used to implement efficient self-stabilizing timestamps that are of independent interest. ", "keywords": "failure detector;consensus;state-machine;wait-free;distributed reset;self-stabilization", "title": "When consensus meets self-stabilization"} {"abstract": "The analysis of structures is affected by uncertainty in the structure's material properties, geometric parameters, boundary conditions and applied loads. These uncertainties can be modelled by random variables and random fields. Amongst the various problems affected by uncertainty, the random eigenvalue problem is especially important when analyzing the dynamic behavior or the buckling of a structure. The methods that stand out in dealing with the random eigenvalue problem are the perturbation method and methods based on Monte Carlo Simulation. In the past few years, methods based on Polynomial Chaos (PC) have been developed for this problem, where each eigenvalue and eigenvector are represented by a PC expansion. In this paper four variants of a method hybridizing perturbation and PC expansion approaches are proposed and compared. The methods use the Rayleigh quotient, the power method, the inverse power method and the eigenvalue equation. PC expansions of eigenvalues and eigenvectors are obtained with the proposed methods. The new methods are applied to the problem of an Euler-Bernoulli beam and a thin plate with stochastic properties.", "keywords": "stochastic finite element method;random eigenvalue problem;perturbation;polynomial chaos;iterative methods", "title": "Hybrid perturbation-Polynomial Chaos approaches to the random algebraic eigenvalue problem"} {"abstract": "The work presented in this paper shows that the mixed-type scheme of Murman and Cole, originally developed for a scalar equation, can be extended to systems of conservation laws. A characteristic scheme for the equations of gas dynamics is introduced that has a close connection to a four-operator scheme for the Burgers-Hopf equation. The results indicate that the scheme performs well on the classical test cases. The scheme has no tuning parameters and can be interpreted as the projection of an L-infinity-stable scheme. At steady state, second-order accuracy is obtained as a by-product of the box-scheme feature. ", "keywords": "conservative box-scheme;euler equations;gas dynamics", "title": "A conservative box-scheme for the Euler equations"} {"abstract": "In this paper we introduce a novel end-to-end approach for achieving the dual goal of enhanced reliability under path failures and multi-path load balancing in mobile ad hoc networks (MANETs). These goals are achieved by fully exploiting the presence of multiple paths in mobile ad hoc networks in order to jointly attack the problems of frequent route failures and load balancing. More specifically, we build a disjoint-path identification mechanism for maintaining multiple routes between two endpoints on top of the Stream Control Transmission Protocol (SCTP), and the Dynamic Source Routing (DSR) protocol. A number of additional modifications are incorporated into the SCTP protocol in order to allow its smooth operation.
The proposed approach differs from previous related work in that it is an entirely end-to-end scheme built on top of a transport layer protocol. We provide both analytical and simulation results that prove the efficiency of our approach over a wide range of mobility scenarios.", "keywords": "mobile ad hoc networks;sctp;dsr;reliability;load balancing", "title": "Using a new protocol to enhance path reliability and realize load balancing in mobile ad hoc networks"} {"abstract": "So-called digital natives can now be found even at the top management levels of organizations. This new generation of managers takes management support systems (MSS) for granted, but also has increasingly high expectations that these systems satisfy their individual usage preferences. Accordingly, they question MSS that provide no adaptation mechanisms for their particular working style, for the various relevant MSS use cases, and for the different ways of accessing an MSS. This article identifies different usage situations of executives, defined as classes of similar user-group preferences, and proposes levers for conceptually adapting MSS design to them. Based on the results of a literature review, user-group preferences are first classified in the form of 36 usage situations. Building on this, we make suggestions for the selection of end devices. We complete the configuration model by also including the design of the MSS user interface. Finally, we demonstrate the usefulness of our proposal by means of a pilot implementation and evaluate it.", "keywords": "management support systems;is analysis and design;user-group preferences;use factors in mss design;mss configuration", "title": "Situational Management Support Systems"} {"abstract": "An abstract regular polytope P of rank n can only be realized faithfully in Euclidean space E^d of dimension d if d >= n when P is finite, or d >= n - 1 when P is infinite (that is, P is an apeirotope). In case of equality, the realization of P is said to be of full rank. If there is a faithful realization of P of dimension d = n + 1 or d = n (according as P is finite or infinite), then P is said to be of nearly full rank. In previous papers, all regular polytopes and apeirotopes of nearly full rank of dimension at most four have been classified. This paper classifies the regular polytopes and apeirotopes of nearly full rank in all higher dimensions.", "keywords": "abstract regular polytope;realization;faithful;nearly full rank;fine schlafli symbol", "title": "Regular Polytopes of Nearly Full Rank"} {"abstract": "When a decision maker expresses his/her opinions by means of an interval reciprocal comparison matrix, the study of consistency becomes a very important aspect in decision making in order to avoid a misleading solution. In the present paper, an acceptably consistent interval reciprocal comparison matrix is defined, which can be reduced to an acceptably consistent crisp reciprocal comparison matrix when the intervals become exact numbers. An interval reciprocal comparison matrix with unacceptable consistency can be easily adjusted such that the revised matrix possesses acceptable consistency.
Utilizing a convex combination method, a family of crisp reciprocal comparison matrices with acceptable consistency can be obtained, whose weights are further found to exhibit a convex-combination structure and are aggregated to obtain interval weights from an acceptably consistent interval reciprocal comparison matrix. A novel, simple yet effective possibility degree formula is presented to rank interval weights. Numerical results are presented to illustrate the proposed approaches and compare them with other existing procedures. ", "keywords": "interval reciprocal comparison matrix;acceptable consistency;convex combination;interval weight;possibility degree formula", "title": "Acceptable consistency analysis of interval reciprocal comparison matrices"} {"abstract": "This article compares and contrasts two technologies for delivering broadband wireless Internet access services: 3G vs. WiFi. The former, 3G, refers to the collection of third-generation mobile technologies that are designed to allow mobile operators to offer integrated data and voice services over mobile networks. The latter, WiFi, refers to the 802.11b wireless Ethernet standard that was designed to support wireless LANs. Although the two technologies reflect fundamentally different service, industry, and architectural design goals, origins, and philosophies, each has recently attracted a lot of attention as a candidate for the dominant platform for providing broadband wireless access to the Internet. It remains an open question as to the extent to which these two technologies are in competition or, perhaps, may be complementary. If they are viewed as in competition, then the triumph of one at the expense of the other would be likely to have profound implications for the evolution of the wireless Internet and structure of the service-provider industry.", "keywords": "internet;broadband;wireless;3g;wlan;ethernet;access;spectrum;economics;industry structure", "title": "Wireless Internet access: 3G vs. WiFi?"} {"abstract": "This work considers the problem of controlling multiple nonholonomic vehicles so that they converge to a scent source without colliding with each other. Since the control is to be implemented on a simple 8-bit microcontroller, fuzzy control rules are used to simplify a linear quadratic regulator control design. The inputs to the fuzzy controllers for each vehicle are the noisy direction to the source, the distance to the closest neighbor vehicle, and the direction to the closest vehicle. These directions are discretized into four values: forward, behind, left, and right; and the distance into three values: near, far, and gone. The values of the control at these discrete values are obtained based on the collision-avoidance repulsive forces and an attractive force towards the goal. A fuzzy inference system is used to obtain control values from a small number of discrete input values. Simulation results are provided which demonstrate that the fuzzy control law performs well compared to the exact controller.
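To picture the discretized rule base just described, here is a toy crisp sketch; the numeric turn commands are placeholders (the paper derives control values from an LQR design and interpolates them with a fuzzy inference system):

```python
# Discrete input values, as in the abstract.
DIRS = ("forward", "behind", "left", "right")
DISTS = ("near", "far", "gone")

def control(goal_dir, neighbor_dist, neighbor_dir):
    """Toy crisp rule base: attraction toward the scent source plus repulsion
    from the closest neighbor when it is near. The numbers are illustrative
    placeholders, not the LQR-derived values of the paper."""
    turn = {"forward": 0.0, "left": +0.5, "right": -0.5, "behind": +1.0}[goal_dir]
    if neighbor_dist == "near":
        # Steer away from the neighbor's direction.
        turn -= {"forward": +0.8, "left": +0.4, "right": -0.4, "behind": 0.0}[neighbor_dir]
    return turn

print(control("left", "near", "forward"))  # attracted left, repelled from ahead
```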
In fact, the fuzzy controller demonstrates improved robustness to noise.", "keywords": "fuzzy logic;mobile robots;decentralized;linear quadratic;group behavior;cooperative robotics;kalman estimation", "title": "Decentralized fuzzy control of multiple nonholonomic vehicles"} {"abstract": "We study the optimal configuration of p-cycles in survivable wavelength division multiplexing (WDM) optical mesh networks with sparse-partial wavelength conversion while 100% restorability is guaranteed against any single failure. We formulate the problem as two integer linear programs (Optimization Models I and II) which have the same constraints, but different objective functions. p-cycles and wavelength converters are optimally determined subject to the constraint that only a given number of nodes have wavelength conversion capability, and the maximum number of wavelength converters that can be placed at such nodes is limited. Optimization Model I has a composite sequential objective function that first (G1) minimizes the cost of link capacity used by all p-cycles in order to accommodate a set of traffic demands; and then (G2) minimizes the total number of wavelength converters used in the entire network. In Optimization Model II, the cost of one wavelength converter is measured as the cost of a deployed wavelength link with a length of a units; and the objective is to minimize the total cost of link capacity and wavelength converters required by p-cycle configuration. During p-cycle configuration, our schemes fully take into account wavelength converter sharing, which reduces the number of converters required while attaining a satisfactory level of performance. Our simulation results indicate that the proposed schemes significantly outperform existing approaches in terms of protection cost, number of wavelength conversion sites, and number of wavelength converters needed.", "keywords": "converter sharing;integer linear programming;optimal p-cycle configuration;sparse-partial wavelength conversion;wdm optical networks", "title": "On optimal p-cycle-based protection in WDM optical networks with sparse-partial wavelength conversion"} {"abstract": "Medical training in childbirth for young obstetricians involves performing real deliveries, under supervision. This medical procedure becomes more complicated when instrumented deliveries requiring the use of forceps or suction cups become necessary. For this reason, the use of a versatile, configurable childbirth simulator, taking into account different anatomical and pathological cases, would provide an important benefit in the training of obstetricians, and improve medical procedures. The production of this type of simulator should be generally based on a computerized birth simulation, enabling the computation of the deformation of the parturient woman's reproductive organs and their interactions with the fetus, as well as the calculation of the efforts produced during the second stage of labor. In this paper, we present a geometrical and biomechanical modeling of the parturient's main organs involved in the birth process, interacting with the fetus. Instead of searching for absolute precision, we seek a good compromise between accuracy and model complexity. At this stage, to verify the correctness of our hypothesis, we use finite element analysis because of its reliability, precision and stability.
Moreover, our study improves the previous work carried out on childbirth simulators because: (a) our childbirth model takes into account all the major organs involved in the birth process, thus potentially enabling different childbirth scenarios; (b) the fetal head is not treated as a rigid body and its motion is computed by taking into account realistic boundary conditions, i.e. we do not impose a pre-computed fetal trajectory; (c) we take into account the cyclic uterine contractions as well as voluntary efforts produced by the muscles of the abdomen; (d) a slight pressure is added inside the abdomen, representing the residual muscle tone. The next stage of our work will concern the optimization of our numerical resolution approach to obtain interactive-time simulation, enabling it to be coupled to our haptic device.", "keywords": "biomechanical modeling of organs;fetal descent;finite element model;medical training", "title": "Biomechanical simulation of the fetal descent without imposed theoretical trajectory"} {"abstract": "Complex patterns in neuronal networks emerge from the cooperative activity of the participating neurons, synaptic connectivity and network topology. Several neuron types exhibit complex intrinsic dynamics due to the presence of nonlinearities and multiple time scales. In this paper we extend previous work on hyperexcitability of neuronal networks, a hallmark of epileptic brain seizure generation, which results from the net imbalance between excitation and inhibition and the ability of certain neuron types to exhibit abrupt transitions between low and high firing frequency regimes as the levels of recurrent AMPA excitation change. We examine the effect of different topologies and connection delays on the hyperexcitability phenomenon in networks having recurrent synaptic AMPA (fast) excitation (in the absence of synaptic inhibition) and demonstrate the emergence of additional time scales.", "keywords": "neuronal networks;synchronization", "title": "Complex patterns in networks of hyperexcitable neurons"} {"abstract": "In order to support personalized occupant comfort and building energy efficiency as well as safety, emergency, and context-aware information exchange scenarios, next-generation buildings will be smart. In this paper we propose an agent-oriented decentralized and embedded architecture based on wireless sensor and actuator networks (WSANs) for enabling efficient and effective management of buildings. The main objective of the proposed architecture is to fully support distributed and coordinated sensing and actuation operations. The building management architecture is implemented at the WSAN side through MAPS (Mobile Agent Platform for Sun SPOTs), an agent-based framework for programming WSN applications based on the Sun SPOT sensor platform, and at the base station side through an OSGi-based application. The proposed agent-oriented architecture is demonstrated in a simple yet effective operating scenario related to monitoring workstation usage in computer laboratories/offices.
The high modularity of the proposed architecture allows for easy adaptation of higher-level application-specific agents that can therefore exploit the architecture to implement intelligent building management policies.", "keywords": "smart buildings;multi-agent systems;wireless sensor and actuator networks;building management systems", "title": "Decentralized Management of Building Indoors through Embedded Software Agents"} {"abstract": "Electronic Governance (EGOV) research studies the use of Information and Communication Technologies to improve governance processes. Sustainable Development (SD) research studies possible development routes that satisfy the needs of the present generation without compromising the ability of the future generations to meet their own needs. Despite substantial progress in advancing both domains independently, little research exists at their intersection: how to utilize EGOV in support of SD. We call this intersection Electronic Governance for Sustainable Development (EGOV4SD). This paper: 1) proposes a conceptual framework for EGOV4SD, 2) proposes an EGOV4SD research assessment framework, and 3) applies both frameworks to determine the state of EGOV4SD research. The main contribution of the paper is establishing a foundation for EGOV4SD research.", "keywords": "electronic governance;sustainable development;electronic governance for sustainable development;meta-research", "title": "Electronic Governance for Sustainable Development: Conceptual framework and state of research"} {"abstract": "One of the decisions of major impact on crop profitability is the selection of the genotype to sow. SelGen is a software tool that makes it possible to evaluate and dynamically compare genotypes; this allows a faster selection of the suitable genotype. The software processes, according to the options selected by the user, the information contained in a database regarding the genotype yield in different environments. The results are shown in graphs and tables. The database included in the program is updated twice a year. Users can analyze their own databases.", "keywords": "genotype selection;crop yield;evaluation system", "title": "SelGen: System to dynamically evaluate and compare the agronomic behavior of genotypes that participate in networks of comparative yield trials"} {"abstract": "The production process of mineral wool is affected by several constantly changing factors. The ingredients for the mineral wool are melted in a furnace. The molten mineral charge exits the bottom of the furnace in a water-cooled trough and falls into a fiberization device (the centrifuge). The centrifuge forms the fibers. At this stage binders are injected to bind the fibers together. To ensure the quality of the end product (a consistent thickness), the flow of the bound fibers must be as constant as possible. One way to ensure that is to control the speed of the conveyor belt that transports the bound fibers from the centrifuge to the curing process. A predictive functional controller and a PID controller are considered as replacements for an existing algorithm. Both can easily replace it, as they do not require the installation of any new sensors. All three algorithms are presented and tested on a developed plant model.
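A textbook discrete PID loop of the kind compared here might look as follows; the gains, sampling time and signal names are illustrative, not the paper's:

```python
class PID:
    """Textbook discrete PID controller; a stand-in for the PID variant
    compared in the study (placeholder gains, not tuned to the real plant)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive the belt speed so the measured wool thickness tracks its setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
belt_speed = 1.0
belt_speed += pid.step(setpoint=50.0, measured=48.7)
```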
The study showed that the predictive control gives better results than both the existing algorithm and the PID controller.", "keywords": "stone wool process;predictive functional control;thickness control;conveyor belt control", "title": "Control of mineral wool thickness using predictive functional control"} {"abstract": "MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein coding genes. Recent studies have shown miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets.", "keywords": "systems biology;genomics;microrna;bioinformatics;zebrafish", "title": "Bioinformatics resource manager v2.3: an integrated software environment for systems biology with microRNA and cross-species analysis tools"} {"abstract": "CAD-CAM integration has involved either design with standard manufacturing features (feature-based design), or interpretation of a solid model based on a set of predetermined feature patterns (automatic feature recognition). Thus existing approaches are limited in application to predefined features, and also disregard the dynamic nature of the process and tool availability on the manufacturing shop floor. To overcome this problem, we develop a process-oriented approach to design interpretation, and model the shape-producing capabilities of the tools into tool classes. We then interpret the part by matching regions of it with the tool classes directly. In addition, there could be more than one way in which a part can be interpreted, and to obtain an optimal plan, it is necessary for an integrated computer-aided process planning system to examine these alternatives. We develop a systematic search algorithm to generate the different interpretations, and a heuristic approach to sequence operations (set-ups/tools) for the features of the interpretations generated. The heuristic operation sequencing algorithm considers features and their manufacturing constraints (precedences) simultaneously, to optimally allocate set-ups and tools for the various features. The modules within the design interpretation and process planner are linked through an abstracted qualitative model of feature interactions. Such an abstract representation is convenient for geometric reasoning tasks associated with planning and design interpretation.", "keywords": "computer-aided process planning;qualitative spatial reasoning;design interpretation;feature recognition;feature-based manufacturing;tool/process capability matching", "title": "Integrated process planning using tool/process capabilities and heuristic search"} {"abstract": "In this paper, new approaches to the variational iteration method are developed to handle nonlinear problems. The proposed approaches are capable of reducing the size of calculations and easily overcome the difficulty arising in calculating complicated integrals. Numerical examples are examined to show the efficiency of the techniques. The modified approaches show improvements over the existing numerical schemes.
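For context, the correction functional at the heart of the standard variational iteration method, for an equation Lu + Nu = g(t), is the well-known iteration below (a reference formula, not the paper's modified schemes):

```latex
u_{n+1}(t) = u_n(t)
  + \int_0^{t} \lambda(s)\,\bigl( L u_n(s) + N \tilde{u}_n(s) - g(s) \bigr)\,\mathrm{d}s
% \lambda is the Lagrange multiplier identified via variational theory;
% \tilde{u}_n denotes a restricted variation (\delta \tilde{u}_n = 0).
```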
", "keywords": "variational iteration method;lagrange multiplier;klein-gordon equation;sine-gordon equation", "title": "Reliable approaches of variational iteration method for nonlinear operators"} {"abstract": "The following known observation is useful in establishing program termination: if a transitive relation R is covered by finitely many well-founded relations U(1),..., U(n) then R is well-founded. A question arises how to bound the ordinal height vertical bar R vertical bar of the relation R in terms of the ordinals alpha(i) = vertical bar U(i) vertical bar. We introduce the notion of the stature parallel to P parallel to of a well partial ordering P and show that vertical bar R vertical bar <= parallel to alpha(1) x ... x alpha(n) parallel to and that this bound is tight. The notion of stature is of considerable independent interest. We define parallel to P parallel to as the ordinal height of the forest of nonempty bad sequences of P, but it has many other natural and equivalent definitions. In particular, parallel to P parallel to is the supremum, and in fact the maximum, of the lengths of linearizations of P. And parallel to alpha(1) x ... x alpha(n) parallel to is equal to the natural product alpha(1) circle times ... circle times alpha(n).", "keywords": "algorithms;theory;program termination;well partial orderings;covering observation;game criterion", "title": "Program termination and well partial orderings"} {"abstract": "This paper presents an approach to recover time variant information from software repositories. It is widely accepted that software evolves due to factors such as defect removal, market opportunity or adding new features. Software evolution details are stored in software repositories which often contain the changes history. On the other hand there is a lack of approaches, technologies and methods to efficiently extract and represent time dependent information. Disciplines such as signal and image processing or speech recognition adopt frequency domain representations to mitigate differences of signals evolving in time. Inspired by time-frequency duality, this paper proposes the use of Linear Predictive Coding (LPC) and Cepstrum coefficients to model time varying software artifact histories. LPC or Cepstrum allow obtaining very compact representations with linear complexity. These representations can be used to highlight components and artifacts evolved in the same way or with very similar evolution patterns. To assess the proposed approach we applied LPC and Cepstral analysis to 211 Linux kernel releases (i.e., from 1.0 to 1.3.100), to identify files with very similar size histories. The approach, the preliminary results and the lesson learned are presented in this paper.", "keywords": "software evolution;data mining", "title": "linear predictive coding and cepstrum coefficients for mining time variant information from software repositories"} {"abstract": "We develop and evaluate a data-driven approach for detecting unusual (anomalous) patient-management decisions using past patient cases stored in electronic health records (EHRs). Our hypothesis is that a patient-management decision that is unusual with respect to past patient care may be due to an error and that it is worthwhile to generate an alert if such a decision is encountered. We evaluate this hypothesis using data obtained from EHRs of 4486 post-cardiac surgical patients and a subset of 222 alerts generated from the data. We base the evaluation on the opinions of a panel of experts. 
The results of the study support our hypothesis that outlier-based alerting can lead to promising true alert rates. We observed true alert rates that ranged from 25% to 66% for a variety of patient-management actions, with 66% corresponding to the strongest outliers.", "keywords": "machine learning;clinical alerting;conditional outlier detection;medical errors", "title": "Outlier detection for patient monitoring and alerting"} {"abstract": "We present a correspondence-based system for visual object recognition with invariance to position, orientation, scale and deformation. The system is intermediate between high- and low-dimensional representations of correspondences. The essence of the approach is based on higher-order links, called here maplets, which are specific to narrow ranges of mapping parameters (position, scale and orientation), which interact cooperatively with each other, and which are assumed to be formed by learning. While being based on dynamic links, the system overcomes previous problems with that formulation in terms of speed of convergence and range of allowed variation. We perform face recognition experiments, comparing our system to other published systems. We see our work as a step towards a reformulation of neural dynamics that includes rapid network self-organization as an essential aspect of brain state organization.", "keywords": "object recognition;correspondence;dynamic link;map formation;self-organization;maplet", "title": "Maplets for correspondence-based object recognition"} {"abstract": "Human multitasking is often the result of self-initiated interruptions in the performance of an ongoing task. These self-interruptions occur in the absence of external triggers such as electronic alerts or email notifications. Compared to externally induced interruptions, self-interruptions have not received enough research attention. To address this gap, this paper develops a typology of self-interruptions based on the integration of Flow Theory and Self-regulation Theory. In this new typology, the two major categories stem from positive and negative feelings of task progress and prospects of goal attainment. The proposed classification is validated in an experimental multitasking environment with pre-defined tasks. Empirical findings indicate that negative feelings trigger more self-interruptions than positive feelings. In general, more self-interruptions result in lower accuracy in all tasks. The results suggest that negative internal triggers of self-interruptions unleash a downward spiral that may degrade performance.", "keywords": "multitasking;interruptions;self-interruptions;performance;flow", "title": "Self-interruptions in discretionary multitasking"} {"abstract": "Premature convergence is a major challenge for the particle swarm optimization (PSO) algorithm when dealing with multi-modal problems. This is partly due to insufficient exploration capability caused by the fast convergence speed, especially in the final stage. In this paper, the PSO is regarded as a two-input, one-output feedback system, and two PID controllers are incorporated into the methodology of PSO to improve the population diversity. Unlike the integral controller, the PID controller has three independent parameters and adjusts them dynamically. Theoretical results with support set theory and stability analysis both demonstrate that the PID controller provides more chances to escape from a local optimum.
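One plausible reading of the PSO-as-feedback-system idea is sketched below: a PID term on the diversity error is added to the velocity update to expand or contract the swarm. The coupling, gains and diversity measure are our own illustrative choices and may differ from the paper's exact update rule:

```python
import numpy as np

def pid_pso(f, dim=2, n=30, iters=200, target_div=0.5):
    """Standard PSO plus a PID correction that steers swarm diversity toward
    a target value (illustrative sketch of a PID-controlled PSO)."""
    rng = np.random.default_rng(1)
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    integ, prev_err = 0.0, 0.0
    for _ in range(iters):
        g = pbest[pval.argmin()]
        div = np.linalg.norm(x - x.mean(0), axis=1).mean()   # diversity measure
        err = target_div - div
        integ += err; dterm = err - prev_err; prev_err = err
        u = 0.5 * err + 0.01 * integ + 0.1 * dterm           # PID correction
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x) \
            + u * (x - x.mean(0))                            # expand/contract swarm
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
    return pbest[pval.argmin()], pval.min()

print(pid_pso(lambda z: (z ** 2).sum()))  # sphere function
```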
To validate the efficiency of this new variant, four other well-known variants are used for comparison, including the comprehensive learning PSO, the modified time-varying accelerator coefficients PSO, the integral-controlled PSO and the standard version; the test suite consists of five unconstrained numerical benchmarks with dimensionalities 30 and 100, respectively. Simulation results show that PID-controlled PSO is suitable for high-dimensional multi-modal problems due to its large exploration capability in the final stage.", "keywords": "particle swarm optimization;pid controller;support set theory;stability analysis", "title": "PID-Controlled Particle Swarm Optimization"} {"abstract": "Motivated by the computational complexity of determining whether a graph is hamiltonian, we study from an algorithmic perspective a class of polyhedra called k-pyramids, introduced in [31], and discuss related applications. We prove that determining whether a given graph is the 1-skeleton of a k-pyramid, and if so whether it is belted or not, can be done in polynomial time for k <= 3. The impact on hamiltonicity follows from the traceability of all 2-pyramids and non-belted 3-pyramids, and from the hamiltonicity of all non-belted 2-pyramids. The algorithm can also be used to determine the outcome for larger values of k, but the complexity increases exponentially with k. Lastly, we present applications of the algorithm, and improve the known bounds for the minimal cardinality of systems of bases called foundations in graph families with interesting properties concerning traceability and hamiltonicity.", "keywords": "pyramid;prism;halin graph;hamiltonian", "title": "Small k-pyramids and the complexity of determining k"} {"abstract": "Performance of cooperative relaying employing infrastructure-based fixed relays having multiple antennas has been investigated. Employing an MGF-based approach, closed-form expressions for the outage probability and the bit error rate of a BPSK signal have been derived, when the relay and the destination are assumed to perform MRC combining of the signals. The effect of relay placement on the system performance has also been studied under different path loss conditions.", "keywords": "outage probability;bit error rate for bpsk;mrc combining;multi-antenna relay;decode and forward mode", "title": "Performance of MRC combining multi-antenna cooperative relay network"} {"abstract": "In this paper, we propose novel lower and upper bounds on the average symbol error rate (SER) of the dual-branch maximal-ratio combining and equal-gain combining diversity receivers assuming independent branches. M-ary pulse amplitude modulation and M-ary phase shift keying schemes are employed and operation over the alpha-mu fading channel is assumed. The proposed bounds are given in closed form and are very simple to calculate as they are composed of a double finite summation of basic functions that are readily available in commercial software packages. Furthermore, the proposed bounds are valid for any combination of the parameters alpha and mu as well as M.
Numerical results presented show that the proposed bounds are very tight when compared to the exact SER obtained by performing the exact integrations numerically, making them an attractive and much simpler alternative for SER evaluation studies.", "keywords": "alpha-mu fading;maximal-ratio combining;equal-gain combining;symbol error rate;approximation;bounds", "title": "Novel Tight Closed-Form Bounds for the Symbol Error Rate of EGC and MRC Diversity Receivers Employing Linear Modulations Over alpha-mu Fading"} {"abstract": "Objective: To review evaluation literature concerning people, organizational, and social issues and provide recommendations for future research. Method: Analyze this research and make recommendations. Results and Conclusions: Evaluation research is key in identifying how people, organizational, and social issues - all crucial to system design, development, implementation, and use - interplay with informatics projects. Building on a long history of contributions and using a variety of methods, researchers continue developing evaluation theories and methods while producing significant, interesting studies. We recommend that future research: 1) Address concerns of the many individuals involved in or affected by informatics applications. 2) Conduct studies in sites of different types and sizes, and with different scopes of systems and different groups of users. Do multi-site or multi-system comparative studies. 3) Incorporate evaluation into all phases of a project. 4) Study failures, partial successes, and changes in project definition or outcome. 5) Employ evaluation approaches that take account of the shifting nature of health care and project environments, and do formative evaluations. 6) Incorporate people, social, organizational, cultural, and concomitant ethical issues into the mainstream of medical informatics. 7) Diversify research approaches and continue to develop new approaches. 8) Conduct investigations at different levels of analysis. 9) Integrate findings from different applications and contextual settings, different areas of health care, studies in other disciplines, and also work that is not published in traditional research outlets. 10) Develop and test theory to inform both further evaluation research and informatics practice.", "keywords": "evaluation;technology assessment;medical informatics;telemedicine;organizational culture;attitudes towards computers;implementation;barriers;people, organizational, social issues;sociotechnical;ethical issues;qualitative methods;ethnographic methods;multi-method;human-computer interaction", "title": "Future directions in evaluation research: People, organizational, and social issues"} {"abstract": "In order to increase safety in Swedish farming, an intervention methodology to influence attitudes and behaviour was tested. Eighty-eight farmers and farm workers in nine groups gathered on seven occasions during 1 year. The basic concept was to create socially supportive networks and encourage discussions and reflection, focusing on risk manageability. Six of the groups made structured incident/accident analyses. Three of the latter groups also received information on risks and accident consequences. Effects were evaluated in a pre-post questionnaire using six-point scales. A significant increase in safety activity and significant reduction in stress and risk acceptance was observed in the total sample. Risk perception and perceived risk manageability did not change.
Analysing incidents/accidents, but not receiving information, showed a more positive outcome. Qualitative data indicated good feasibility and that the long duration of the intervention was perceived as necessary. The socially supportive network was reported as beneficial for the change process.", "keywords": "long-term safety intervention;attitude and behavioural change;farming", "title": "An intervention method for occupational safety in farming - evaluation of the effect and process"} {"abstract": "In this paper we describe a performance model of the Parallel Ocean Program (POP). In particular, the latest version of POP (v2.0) is considered, which has similarities to and differences from the earlier version (v1.4.3) as commonly used in climate simulations. The performance model encapsulates an understanding of POP's data decomposition, processing flow, and scaling characteristics. The model is parametrized in many of the main input parameters to POP as well as in characteristics of a processing system such as network latency and bandwidth. The performance model has been validated to date on a medium-sized (128-processor) AlphaServer ES40 system with the QsNet-1 interconnection network, and also on a larger-scale (2048-processor) Blue Gene/L system. The accuracy of the performance model is high when using two standard benchmark configurations, one of which represents a realistic configuration similar to that used in Community Climate System Model coupled climate simulations. The performance model is also used to explore the performance of POP after possible optimizations to the code, and different task-to-processor assignment strategies, whose performance cannot be currently measured.", "keywords": "performance modeling;large-scale systems;performance analysis;ocean modeling", "title": "A performance model of the Parallel Ocean Program"} {"abstract": "Based on the remodeling of glycosphingolipids of human tumor cell lines with manipulation of glycosyltransferase genes, roles of sugar moieties in tumor-associated carbohydrate antigens have been analyzed. Two main topics are reported: the roles of ganglioside GD3 in human malignant melanomas and those of GD2 in small cell lung cancer (SCLC). GD3 enhances tyrosine phosphorylation of two adaptor molecules, p130Cas and paxillin, resulting in increased cell growth and invasion in melanoma cells. GD2 also enhances the proliferation and invasion of SCLC cells. GD2 also mediates apoptosis with anti-GD2 monoclonal antibodies (mAbs) via dephosphorylation of the focal adhesion kinase. These approaches have promoted further understanding of mechanisms by which gangliosides modulate malignant properties of human cancer, and the results obtained here propose novel targets for cancer therapy.", "keywords": "glycolipids;gd3;gd2;melanoma;lung cancer;proliferation;invasion", "title": "Biosignals Modulated by Tumor-Associated Carbohydrate Antigens"} {"abstract": "This multidisciplinary research presents a novel hybrid intelligent system to perform a multi-objective industrial parameter optimization process. The intelligent system is based on the application of evolutionary and neural computation in conjunction with identification systems, which makes it possible to optimize the implementation conditions in the manufacturing process of high-precision parts, including finishing precision, while saving time, financial costs and/or energy.
Empirical verification of the proposed hybrid intelligent system is performed in a real industrial domain, where a case study is defined and analyzed. The experiments are carried out based on real dental milling processes using a high precision machining centre with five axes, requiring high finishing precision of measures in micrometers with a large number of process factors to analyze. The results of the experiments, which validate the performance of the proposed approach, are presented in this study.", "keywords": "hybrid intelligent system;dental milling process;optimization;unsupervised learning;identification systems;multi-objective optimization", "title": "A novel hybrid intelligent system for multi-objective machine parameter optimization"} {"abstract": "A general approach for modeling the motion of rigid or deformable objects in viscous flows is presented. It is shown that the rotation of a 3D object in a viscous fluid, regardless of the mechanical property and shape of the object, is defined by a common and simple differential equation, dQ/dt = QΩ, where Q is a matrix defined by the orientation of the object and Ω is the angular velocity tensor of the object. The difference between individual cases lies only in the formulation for the angular velocity. Thus the above equation, together with Jeffery's theory for the angular velocity of rigid ellipsoids, describes the motion of rigid ellipsoids in viscous flows. The same equation, together with Eshelby's theory for the angular velocity of deformable ellipsoids, describes the motion of deformable ellipsoids in viscous flows. Both problems are solved here numerically by a general approach that is much simpler conceptually and more economic computationally, compared to previous approaches that consider the problems separately and require numerical solutions to coupled differential equations about Euler angles or spherical (polar coordinate) angles. A Runge-Kutta approximation is constructed for solving the above general differential equation. Singular cases of Eshelby's equations when the object is spheroidal or spherical are handled in this paper in a much simpler way than in previous work. The computational procedure can be readily implemented in any modern mathematics application that handles matrix operations. Four MathCad Worksheets are provided for modeling the motion of a single rigid or deformable ellipsoid immersed in viscous fluids, as well as the evolution of a system of noninteracting rigid or deformable ellipsoids embedded in viscous flows.", "keywords": "jeffery's theory;eshelby's theory;clast rotation;preferred orientation;viscous flow;numerical modeling", "title": "A general approach for modeling the motion of rigid and deformable ellipsoids in ductile flows"} {"abstract": "A table constraint is explicitly represented as its set of solutions or non-solutions. This ad hoc (or extensional) representation may require space exponential in the arity of the constraint, making enforcing GAC expensive. In this paper, we address the space and time inefficiencies simultaneously by presenting the mddc constraint. mddc is a global constraint that represents its (non-)solutions with a multi-valued decision diagram (MDD). The MDD-based representation has the advantage that it can be exponentially smaller than a table. The associated GAC algorithm (called mddc) has time complexity linear in the size of the MDD, and achieves full incrementality in constant time.
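To make the MDD-based filtering idea above concrete, here is a toy pass in which a value stays in a variable's domain only if it lies on some root-to-terminal path consistent with the current domains. This is a sketch of the principle only; the paper's mddc algorithm is incremental and linear in the MDD size, which this naive version does not reproduce, and the node/edge encoding is an assumption.

```python
# Toy GAC filtering on a layered MDD constraint (principle only; not the
# incremental mddc algorithm from the paper). The MDD encoding is an
# assumption: node -> list of (value, child), with terminal 'T' accepting.
from functools import lru_cache

mdd = {
    'r': [(0, 'a'), (1, 'b')],
    'a': [(0, 'T'), (1, 'T')],
    'b': [(1, 'T')],
}

def gac_filter(mdd, root, domains):
    """Keep a value at layer i only if some accepting path uses it."""
    @lru_cache(maxsize=None)
    def alive(node, layer):
        if node == 'T':
            return True
        return any(v in domains[layer] and alive(child, layer + 1)
                   for v, child in mdd[node])
    supported = [set() for _ in domains]
    def collect(node, layer):
        if node == 'T':
            return
        for v, child in mdd[node]:
            if v in domains[layer] and alive(child, layer + 1):
                supported[layer].add(v)
                collect(child, layer + 1)
    collect(root, 0)
    return supported

# With x1 restricted to {0}, value 1 of x0 loses all its accepting paths.
print(gac_filter(mdd, 'r', [{0, 1}, {0}]))   # -> [{0}, {0}]
```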
In addition, we show how to convert a positive or negative table constraint into an mddc constraint in time linear in the size of the table. Our experiments on structured problems, car sequencing and still-life, show that mddc is also a fast GAC algorithm for some global constraints such as sequence and regular. We also show that mddc is faster than the state-of-the-art generic GAC algorithms in Gent et al. (2007), Lecoutre and Szymanek (2006), Lhomme and Régin (2005) for table constraints.", "keywords": "ad hoc constraint;global constraint;table constraint;positive constraint;negative constraint;multi-valued decision diagram;generalized arc consistency", "title": "An MDD-based generalized arc consistency algorithm for positive and negative table constraints and some global constraints"} {"abstract": "A model on the effects of leader, media, viruses, worms, and other agents on the opinion of individuals is developed and utilized to simulate the formation of consensus in society and of price in the market via the excess between supply and demand. The effects of some time-varying drives (harmonic and hyperbolic) are also investigated.", "keywords": "opinion;leader;media;market;buyers;sellers;excess", "title": "Opinion dynamics driven by leaders, media, viruses and worms"} {"abstract": "The quantron is a hybrid neuron model related to perceptrons and spiking neurons. The activation of the quantron is determined by the maximum of a sum of input signals, which is difficult to use in classical learning algorithms. Thus, training the quantron to solve classification problems requires heuristic methods such as direct search. In this paper, we present an approximation of the quantron trainable by gradient search. We show this approximation improves the classification performance of direct search solutions. We also compare the quantron's and the perceptron's performance in solving the IRIS classification problem.", "keywords": "quantron;spiking neuron;learning algorithm;gradient search;iris classification problem", "title": "ON THE LEARNING POTENTIAL OF THE APPROXIMATED QUANTRON"} {"abstract": "Trust management represents today a promising approach for supporting access control in open environments. While several approaches have been proposed for trust management and significant steps have been made in this direction, a major obstacle that still exists in the realization of the benefits of this paradigm is represented by the lack of adequate support in the DBMS. In this paper, we present a design that can be used to implement trust management within current relational DBMSs. We propose a trust model with a SQL syntax and illustrate the main issues arising in the implementation of the model in a relational DBMS. Specific attention is paid to the efficient verification of a delegation path for certificates. This effort permits a relatively inexpensive realization of the services of an advanced trust management model within current relational DBMSs.", "keywords": "access control;relational dbms;credentials;trust", "title": "trust management services in relational databases"} {"abstract": "Given a linear differential equation with known finite differential Galois group, we discuss methods to construct the minimal polynomial of a solution. We first outline a well known general method involving a basis transformation of the basis of formal solutions at a singular point. In the second part we construct directly the minimal polynomial of an eigenvector of the monodromy matrix at a singular point.
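The quantron abstract above notes that the activation depends on the maximum of a summed input potential, which defeats gradient methods. A common way to restore gradients is to replace the hard max by a smooth surrogate; the sketch below uses log-sum-exp. The alpha-shaped kernel and the log-sum-exp choice are assumptions for illustration, not the paper's exact approximation.

```python
# Toy illustration: the quantron's hard max over time has no useful gradient;
# a smooth approximation (here log-sum-exp) makes the potential differentiable
# in the weights and delays, enabling gradient training. Kernel shape and the
# smoothing choice are assumptions, not the paper's construction.
import numpy as np

def potential(weights, delays, t):
    """Summed postsynaptic potential at times t (alpha-shaped kernels)."""
    s = t - delays[:, None]                      # time since each input
    kern = np.where(s > 0, s * np.exp(1 - s), 0.0)
    return weights @ kern                        # shape: (len(t),)

t = np.linspace(0.0, 10.0, 400)
w = np.array([0.8, 0.5]); d = np.array([1.0, 2.5])
p = potential(w, d, t)

hard_max = p.max()
beta = 20.0                                      # sharpness of the smooth max
soft_max = np.log(np.mean(np.exp(beta * p))) / beta
print(f"hard max {hard_max:.4f}  smooth max {soft_max:.4f}")
# The smooth version supports gradient search on the rule
# "fire iff max potential exceeds a threshold".
```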
The method is very efficient for irreducible second- and third-order linear differential equations, where a one-dimensional eigenspace of some monodromy matrix always exists.", "keywords": "linear differential equations;differential galois theory;algebraic solutions;liouvillian solutions", "title": "Note on algebraic solutions of differential equations with known finite Galois group"} {"abstract": "Recently, Kuwakado and Tanaka proposed a transitive signature scheme for directed trees. In this letter, we show that the Kuwakado-Tanaka scheme is insecure against a forgery attack, in which an attacker is able to forge edge signatures by composing edge signatures provided by a signer.", "keywords": "cryptography;transitive signature;forgery attack", "title": "Security of Kuwakado-Tanaka transitive signature scheme for directed trees"} {"abstract": "Pervasive social computing is a new paradigm of computer science that aims to facilitate the realization of activities in whichever context, with the aid of information devices and considering social relations between users. This vision requires means to support shared experiences by harnessing the communication and computing capabilities of the connected devices, relying on direct or hop-by-hop communications among people who happen to be close to each other. In this paper, we present an approach to turn mobile ad-hoc networks (MANETs) into stable communication environments for pervasive social applications. The proposal is based on an evolution of the VNLayer, a virtualization layer that defined procedures for mobile devices to collaboratively emulate an infrastructure of stationary virtual nodes. We refine the VNLayer procedures and introduce new ones to increase the reliability and the responsiveness of the virtual nodes, which serves to boost the performance of routing with a virtualized version of the well-known AODV algorithm. We prove the advantages of the resulting routing scheme by means of simulation experiments and measurements on a real deployment of an application for immersive and collective learning about History in museums and their surroundings.", "keywords": "pervasive social computing;mobile ad-hoc networks;virtualization", "title": "An improved virtualization layer to support distribution of multimedia contents in pervasive social applications"} {"abstract": "In parallel adaptive applications, the computational structure of the applications changes over time, leading to load imbalances even though the initial load distributions were balanced. To restore balance and to keep communication volume low in further iterations of the applications, dynamic load balancing (repartitioning) of the changed computational structure is required. Repartitioning differs from static load balancing (partitioning) due to the additional requirement of minimizing migration cost to move data from an existing partition to a new partition. In this paper, we present a novel repartitioning hypergraph model for dynamic load balancing that accounts for both communication volume in the application and migration cost to move data, in order to minimize the overall cost. The use of a hypergraph-based model allows us to accurately model communication costs rather than approximate them with graph-based models. We show that the new model can be realized using hypergraph partitioning with fixed vertices and describe our parallel multilevel implementation within the Zoltan load balancing toolkit.
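The repartitioning objective just described combines two terms: communication volume of the new partition and the cost of migrating data from the old one. The sketch below evaluates that combined objective on a toy hypergraph; the alpha weight, the data layout and the use of the connectivity-1 cut metric as the volume term are assumptions, and this is not the Zoltan implementation.

```python
# Minimal sketch of the combined objective described above: total cost =
# communication volume (connectivity-1 metric over hyperedges) plus the
# volume of data migrated between the old and new partitions.

def repartition_cost(hyperedges, old_part, new_part, vertex_size, alpha=1.0):
    # connectivity-1 metric: a hyperedge costs (number of parts it spans - 1)
    comm = sum(len({new_part[v] for v in e}) - 1 for e in hyperedges)
    # migration: every vertex that changes part must move its data
    migration = sum(vertex_size[v] for v in new_part
                    if new_part[v] != old_part[v])
    return alpha * comm + migration

edges = [(0, 1, 2), (2, 3), (1, 3)]
old = {0: 0, 1: 0, 2: 1, 3: 1}
new = {0: 0, 1: 1, 2: 1, 3: 1}          # vertex 1 migrates
size = {v: 10 for v in old}
print(repartition_cost(edges, old, new, size))   # -> 11 (cut 1 + migration 10)
```

Fixing the "old owner" of each vertex as an extra fixed vertex per part is what lets a standard hypergraph partitioner account for the migration term, which is the modeling trick the abstract refers to.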
To the best of our knowledge, this is the first implementation for dynamic load balancing based on hypergraph partitioning. To demonstrate the effectiveness of our approach, we conducted experiments on a Linux cluster with 1024 processors. The results show that, in terms of reducing total cost, our new model compares favorably to the graph-based dynamic load balancing approaches, and multilevel approaches improve the repartitioning quality significantly.", "keywords": "dynamic load balancing;hypergraph partitioning;parallel algorithms;scientific computing;distributed memory computers", "title": "A repartitioning hypergraph model for dynamic load balancing"} {"abstract": "Distinguishing people with identical names is becoming more and more important in Web search. This research aims to display person icons on a map to help users select person clusters that are separated into different people from the result of person searches on the Web. We propose a method to assign each person cluster one piece of location information. Our method comprises two processes: (a) extracting location candidates from Web pages and (b) assigning location information using a local search engine. Our main idea is to exploit search engine rankings and character distance to obtain good location information among the location candidates. Experimental results revealed the usefulness of our proposed method. We also show a developed prototype system.", "keywords": "information extraction;location information;character distance;web people search;map interface", "title": "assigning location information to display individuals on a map for web people search results"} {"abstract": "With the ever-increasing costs of manual content creation for virtual worlds, the potential of creating it automatically becomes too attractive to ignore. However, for most designers, traditional procedural content generation methods are complex and unintuitive to use, hard to control, and generated results are not easily integrated into a complete and consistent virtual world. We introduce a novel declarative modeling approach that enables designers to concentrate on stating what they want to create instead of on describing how they should model it. It aims at reducing the complexity of virtual world modeling by combining the strengths of semantics-based modeling with manual and procedural approaches. This article describes two of its main contributions to procedural modeling of virtual worlds: interactive procedural sketching and virtual world consistency maintenance. We discuss how these techniques, integrated in our modeling framework SketchaWorld, build up to enable designers to create a complete 3D virtual world in minutes. Procedural sketching provides a fast and more intuitive way to model virtual worlds, by letting designers interactively sketch their virtual world using high-level terrain features, which are then procedurally expanded using a variety of integrated procedural methods. Consistency maintenance guarantees that the semantics of all terrain features is preserved throughout the modeling process. In particular, it automatically solves conflicts possibly emerging from interactions between terrain features. We believe that these contributions together represent a significant step towards providing more user control and flexibility in procedural modeling of virtual worlds.
It can therefore be expected that, by further reducing its complexity, virtual world modeling will become accessible to an increasingly broad group of users.", "keywords": "virtual worlds;declarative modeling;semantic modeling;consistency maintenance;procedural methods;procedural sketching", "title": "A declarative approach to procedural modeling of virtual worlds"} {"abstract": "The dynamic dam-fluid interaction is considered via a Lagrangian approach, based on a fluid finite element (FE) model under the assumption of small displacement and inviscid fluid. The fluid domain is discretized by enhanced displacement-based finite elements, which can be considered an evolution of those derived from the pioneering works of Bathe and Hahn [Bathe KJ, Hahn WF. On transient analysis of fluid-structure system. Comp Struct 1979;10:383-93] and of Wilson and Khalvati [Wilson EL, Khalvati M. Finite element for the dynamic analysis of fluid-solid system. Int. J Numer Methods Eng 1983;19:1657-68]. The irrotational condition for inviscid fluids is imposed by the penalty method and consequently leads to a type of micropolar media. The model is implemented using a FE code, and the numerical results of a rectangular bidimensional basin (subjected to horizontal sinusoidal acceleration) are compared with the analytical solution. It is demonstrated that the Lagrangian model is able to perform pressure and gravity wave propagation analysis, even if the gravity (or surface) waves are dispersive. The dispersive nature of surface waves indicates that the wave propagation velocity is dependent on the wave frequency. For the practical analysis of the coupled dam-fluid problem the analysed region of the basin must be reduced and the use of suitable asymptotic boundary conditions must be investigated. The classical Sommerfeld condition is implemented by means of a boundary layer of dampers and the analysis results are shown for the cases of sinusoidal forcing. The classical Sommerfeld condition is highly efficient for pressure-based FE modelling, but may not be considered fully adequate for the displacement-based FE approach. In the present paper a high-order boundary condition proposed by Higdon [Higdon RL. Radiation boundary condition for dispersive waves. SIAM J Numer Anal 1994;31:64-100] is considered. Its implementation requires the resolution of a multifreedom constraint problem, defined in terms of incremental displacements, in the ambit of dynamic time integration problems. The first- and second-order Higdon conditions are developed and implemented. The results are compared with the Sommerfeld condition results, and with the analytical unbounded problem results. Finally, a number of finite element results are presented and their related features are discussed and critically compared.", "keywords": "absorbing boundary;dam-fluid interaction;lagrangian finite element;dynamic analysis", "title": "Lagrangian finite element modelling of dam-fluid interaction: Accurate absorbing boundary conditions"} {"abstract": "The use of multiple independent spanning trees (ISTs) for data broadcasting in networks provides a number of advantages, including the increase of fault-tolerance and bandwidth. The designs of multiple ISTs on several classes of networks have been widely investigated. In this paper we show a construction algorithm of ISTs on odd graphs, and we analyze that all the lengths of the paths in the ISTs are less than or equal to the length of the shortest path + 4, which is optimal.
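For readers unfamiliar with the host topology of the IST construction above: the odd graph O_d has the (d-1)-subsets of a (2d-1)-element set as vertices, with edges between disjoint subsets; O_3 is the Petersen graph. The sketch below builds the graph from that standard definition; the IST construction itself is not reproduced.

```python
# Build the odd graph O_d from its standard definition (background for the
# IST abstract above; not the paper's construction algorithm).
from itertools import combinations

def odd_graph(d):
    verts = [frozenset(c) for c in combinations(range(2 * d - 1), d - 1)]
    # Two vertices are adjacent iff their subsets are disjoint.
    return {v: [u for u in verts if not (u & v)] for v in verts}

g = odd_graph(3)                            # O_3 is the Petersen graph
print(len(g))                               # 10 vertices
print({len(nbrs) for nbrs in g.values()})   # {3}: every vertex has degree d = 3
```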
We also prove that the heights of the ISTs we constructed are d+1, which again is optimal, since the fault diameter of an odd graph is d+1.", "keywords": "optimal independent spanning trees;odd graphs;internally disjoint paths;algorithms", "title": "Optimal Independent Spanning Trees on Odd Graphs"} {"abstract": "The effect of single event transient (SET) on reliability has become a significant concern for digital circuits. This paper proposes an algorithm for evaluating the reliability for SET on digital circuits, based on signal probability, the universal generating function technique, and generalized reliability block diagrams. The algorithm provides an expression for the reliability of SET that takes into account the effects of logic masking, error attenuation of gates, and crosstalk effects among interconnect wires. We perform simulations of ISCAS85 circuits. The results indicate that the proposed algorithm can effectively evaluate the reliability for SET on circuits. The error attenuation of gates can increase the reliability by more than 41.6%, and the masking and crosstalk effects will improve the reliability by more than 43%.", "keywords": "crosstalk effects;masking;reliability evaluation;single event transient", "title": "Reliability Evaluation for Single Event Transients on Digital Circuits"} {"abstract": "In this paper, we introduce the following problem in the theory of algorithmic self-assembly: given an input shape as the seed of a tile-based self-assembly system, design a finite tile set that can, in some sense, uniquely identify whether or not the given input shape-drawn from a very general class of shapes-matches a particular target shape. We first study the complexity of correctly identifying squares. Then we investigate the complexity associated with the identification of a considerably more general class of non-square, hole-free shapes.", "keywords": "algorithmic self-assembly;kolmogorov complexity;rnase assembly model;shape identification problem", "title": "Identifying Shapes Using Self-assembly"} {"abstract": "Traditional gait and fluoroscopy analysis of human movement are largely utilised but are still limited in registration, integration, synchronisation and visualisation capabilities. The present work exploits the features of a recently developed software tool based on multimodal display (Data Manager, developed within the EU-funded project Multimod) in an exemplary clinical case. Standard lower limb gait analysis, comprising segment position, ground reaction force and EMG data collection, and three-dimensional fluoroscopy analysis at the replaced joint were performed in a total knee replacement patient while ascending stairs. Clinical information such as X-rays and standard scores were also available. Data Manager was able to import all this variety of data and to structure these in an original hierarchical tree. Bone and prosthesis component models were registered to corresponding marker position data for effective three-dimensional animations. These were also synchronised with corresponding standard video sequences. Animations, video, and time-histories of collected and also processed data were shown in various combinations, according to specific interests of the bioengineering and medical professionals expected to observe and to interpret this large amount of data. This software tool demonstrated to be a valuable means to enhance representation and interpretation of measurements coming from human motion analysis.
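The SET reliability abstract above builds on signal probabilities and logic masking. As background, the sketch below propagates P(signal = 1) through gates under an independence assumption, the usual basis for estimating how often a transient is logically masked. The gate set and circuit are illustrative; the paper's full algorithm (with attenuation and crosstalk terms) is not reproduced.

```python
# Toy signal-probability propagation for logic masking analysis (background
# for the SET reliability abstract above; not the paper's algorithm).

def p_and(pa, pb): return pa * pb
def p_or(pa, pb):  return pa + pb - pa * pb
def p_not(pa):     return 1.0 - pa

# Circuit: out = (a AND b) OR (NOT c), with uniform random inputs.
pa = pb = pc = 0.5
p_g1 = p_and(pa, pb)           # P(a AND b = 1) = 0.25
p_out = p_or(p_g1, p_not(pc))  # P(out = 1) = 0.625

# A transient on g1 propagates through the OR gate only when the other OR
# input is 0, i.e. with probability P(NOT c = 0) = P(c = 1).
p_propagate = pc
print(f"P(out=1) = {p_out:.3f}, P(SET on g1 reaches output) = {p_propagate:.3f}")
```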
Within a single software environment, a thorough and effective clinical and biomechanical analysis of human motion was performed.", "keywords": "gait analysis;stereophotogrammetry;fluoroscopy;total knee replacement;graphical representation;multimodal display;synchronisation;registration", "title": "Advanced multimodal visualisation of clinical gait and fluoroscopy analyses in the assessment of total knee replacement"} {"abstract": "A new adaptive angular speed (AS) estimation method minimizes AS errors. Hysteresis switching technology is applied to avoid speed fluctuations. Errors caused by the mechanical errors of the optical encoder are removed. Merits of high accuracy, fast response, and low fluctuation are achieved. A two-layer hysteresis switching system is realized in a DSC.", "keywords": "hysteresis switch;adaptive angular speed measurement;digital signal controller;optical encoder", "title": "A high-performance angular speed measurement method based on adaptive hysteresis switching techniques"} {"abstract": "Evaluation of segmentation methods is a crucial aspect in image processing, especially in the medical imaging field, where small differences between segmented regions in the anatomy can be of paramount importance. Usually, segmentation evaluation is based on a measure that depends on the number of segmented voxels inside and outside of some reference regions that are called gold standards. Although some other measures have also been used, in this work we propose a set of new similarity measures, based on different features, such as the location and intensity values of the misclassified voxels, and the connectivity and the boundaries of the segmented data. Using the multidimensional information provided by these measures, we propose a new evaluation method whose results are visualized by applying a Principal Component Analysis of the data, obtaining a simplified graphical method to compare different segmentation results. We have carried out an intensive study using several classic segmentation methods applied to a set of simulated MRI data of the brain with several noise and RF inhomogeneity levels, and also to real data, showing that the new measures proposed here, and the results that we have obtained from the multidimensional evaluation, improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.", "keywords": "segmentation evaluation;principal component analysis;multidimensional visualization;image segmentation;mri segmentation;similarity measure", "title": "A multidimensional segmentation evaluation for medical image data"} {"abstract": "Inherent imprecision of data in many applications motivates us to support uncertainty as a first-class concept. Data streams and probabilistic data have recently received notable attention, but mostly in isolation. However, there are many applications, including sensor data management systems and object monitoring systems, which need both issues in tandem. Our main contribution is designing a probabilistic data stream management system, called Sarcheshmeh, for continuous querying over probabilistic data streams. Sarcheshmeh supports uncertainty from input data to final query results. In this paper, after reviewing the requirements and applications of probabilistic data streams, we present our new data model for probabilistic data streams and define our main logical operators formally. Then, we present our query language and physical operators.
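To give a flavour of operators over probabilistic streams of the kind just described, the sketch below runs a selection and a probabilistic threshold over tuples that carry an existence probability. The data structures and operator names are illustrative assumptions, not Sarcheshmeh's actual API.

```python
# Illustrative sketch (not Sarcheshmeh's operators): tuples carry an existence
# probability, a selection preserves it, and a probabilistic threshold
# operator keeps tuples whose probability is at least tau.
from dataclasses import dataclass
from typing import Iterator

@dataclass
class PTuple:
    value: float   # e.g., a sensor reading
    prob: float    # probability the reading actually occurred

def select(stream: Iterator[PTuple], pred) -> Iterator[PTuple]:
    return (t for t in stream if pred(t.value))

def prob_threshold(stream: Iterator[PTuple], tau: float) -> Iterator[PTuple]:
    return (t for t in stream if t.prob >= tau)

readings = [PTuple(21.5, 0.9), PTuple(35.0, 0.4), PTuple(36.2, 0.95)]
hot = prob_threshold(select(iter(readings), lambda v: v > 30.0), tau=0.5)
print([t.value for t in hot])    # [36.2]: only the high-probability hot reading
```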
In addition, we introduce the architecture of Sarcheshmeh and also describe some major challenges, like memory management and our floating precision mechanism, toward designing a more robust system. Finally, we report an evaluation of our system and of the effect of floating precision on the tradeoff between accuracy and efficiency.", "keywords": "probabilistic data stream;probabilistic queries;uncertain sensors", "title": "PROBABILISTIC QUERYING OVER UNCERTAIN DATA STREAMS"} {"abstract": "The paper presents a comprehensive study of the failure envelope (or capacity diagram) of a single elastic pile in sand. The behavior of a pile subjected to different load combinations is simulated using a large number of finite element numerical calculations. The sand is modeled using a constitutive law based on hypoplasticity. In order to find the failure envelope in the three-dimensional space (i.e. horizontal force H, bending moment M and vertical force V), the radial displacement method and swipe tests are numerically performed. It is found that with increasing vertical load the horizontal bearing capacity of the pile decreases. Furthermore, the presence of bending moment on the pile head significantly influences the horizontal bearing capacity, and the capacity diagram in the HM plane manifests an inclined elliptical shape. An analytical equation providing good agreement with the 3D numerical results is finally proposed. The formula is useful for design purposes and the development of simplified numerical modeling strategies such as macro-elements.", "keywords": "3d failure envelope;capacity diagram;pile;hypoplasticity;radial displacement tests;swipe tests;analytical equation;sand", "title": "Numerical study of the 3D failure envelope of a single pile in sand"} {"abstract": "A growing number of research information systems use a semantic linkage technique to represent explicitly the relationships between elements of their content. This practice is now maturing to the point where data on semantically linked research objects, and the scientific relationships expressed by those linkages, can be recognized as a new data source for scientometric studies. Recent activities to provide scientists with tools for expressing their knowledge, hypotheses and opinions about relationships between available information objects in the form of semantic linkages also support this trend. The study presents one such activity performed within the Socionet research information system, with a special focus on (a) a taxonomy of scientific relationships which can exist between research objects, especially between research outputs; and (b) a semantic segment of a research e-infrastructure that includes semantic interoperability support, monitoring of changes in linkages and linked objects, notifications and a new model of scientific communication, and, finally, scientometric indicators built by processing semantic linkage data. Based on knowledge of what semantic linkage data are and how they are stored in a research information system, we propose an abstract computing model of this new data source. This model helps with better understanding of what new indicators can be designed for scientometric studies.
Using current semantic linkage data collected in Socionet, we present some statistical experiments, including examples of indicators based on two data sets: (a) what objects are linked and (b) what scientific relationships (semantics) are expressed by the linkages.", "keywords": "research information system;scientific information objects;semantic linkages;new data source;scientometric studies", "title": "Semantic linkages in research information systems as a new data source for scientometric studies"} {"abstract": "As the capacity and speed of flash memories in the form of solid state disks grow, they are becoming a practical alternative for standard magnetic drives. Currently, most solid-state disks are based on NAND technology and are much faster than magnetic disks in random reads, while in random writes they are generally not. So far, large-scale LTL model checking algorithms have been designed to employ external memory optimized for magnetic disks. We propose algorithms optimized for flash memory access. In contrast to approaches relying on the delayed detection of duplicate states, in this work, we design and exploit appropriate hash functions to re-invent immediate duplicate detection. For flash memory efficient on-the-fly LTL model checking, which aims at finding any counterexample to the specified LTL property, we study hash functions adapted to the two-level hierarchy of RAM and flash memory. For flash memory efficient off-line LTL model checking, which aims at generating a minimal counterexample and scans the entire state space at least once, we analyze the effect of outsourcing a memory-based perfect hash function from RAM to flash memory. Since the characteristics of flash memories are different from magnetic hard disks, the existing I/O complexity model is no longer sufficient. Therefore, we provide an extended model for the computation of the I/O complexity adapted to flash memories that has a better fit to the observed behavior of our algorithms.", "keywords": "model checking;external memory algorithms;algorithm engineering", "title": "Flash memory efficient LTL model checking"} {"abstract": "We consider the problem of decentralized detection in a network consisting of a large number of nodes arranged as a tree of bounded height, under the assumption of conditionally independent and identically distributed (i.i.d.) observations. We characterize the optimal error exponent under a Neyman-Pearson formulation. We show that the Type II error probability decays exponentially fast with the number of nodes, and the optimal error exponent is often the same as that corresponding to a parallel configuration. We provide sufficient, as well as necessary, conditions for this to happen. For those networks satisfying the sufficient conditions, we propose a simple strategy that nearly achieves the optimal error exponent, and in which all non-leaf nodes need only send 1-bit messages.", "keywords": "decentralized detection;error exponent;sensor networks", "title": "Data fusion trees for detection: Does architecture matter"} {"abstract": "This short paper proposes alternative rules for converting generalization/specialization hierarchies and union types, defined in the Extended Entity-Relationship model, to an XML logical model.
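For context on the error-exponent terminology in the decentralized detection abstract above, the classical benchmark is Stein's lemma, stated below. This is standard background, not a result from that paper; in the decentralized setting the divergence is taken between the distributions of what actually reaches the fusion point (e.g., the quantized 1-bit messages).

```latex
% Stein's lemma (standard background): for n conditionally i.i.d. observations
% and a Neyman-Pearson test with the Type I error held below a fixed alpha,
% the best achievable Type II error exponent equals the Kullback-Leibler
% divergence between the two hypotheses:
\lim_{n \to \infty} -\frac{1}{n} \log \beta_n(\alpha)
  \;=\; D(P_0 \,\|\, P_1)
  \;=\; \int p_0(x) \log \frac{p_0(x)}{p_1(x)} \, dx .
```

The tree result above says that, under its sufficient conditions, a bounded-height tree with 1-bit messages attains the same exponent as the parallel configuration.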
Our approach considers all the possible constraints and constructs for generalization and union types, generating abstract schemas for the logical design of XML documents.", "keywords": "union types;generalization/specialization;eer;xml schemas", "title": "conversion of generalization hierarchies and union types from extended entity-relationship model to an xml logical model"} {"abstract": "An experiment-oriented integrated model of the regulation of the biologically ubiquitous NF-κB and AP-1 gene transcription promoters was built by extending a previously developed qualitative process system for simulating cell behavior in the immune system. The core knowledge base (KB) implemented a deep biological ontology including molecular, ultrastructural, cytological, histological, and organismic definitions. KB states, relationships, predicates, and heuristics also represented process interactions between reactive oxygen species, growth factors, and a variety of kinases phosphorylating intermediate molecules in the NF-κB and AP-1 regulatory signaling pathways. The system successfully simulated the molecular process steps underlying the outcomes of eight different molecular genetics laboratory experiments, including those dealing with NF-κB and AP-1 regulation in immunodeficiency virus infection and tumor necrosis factor responses.", "keywords": "qualitative modeling;gene regulation;immunology research", "title": "A qualitative process system for modeling NF-κB and AP-1 gene regulation in immune cell biology research"} {"abstract": "We consider a type of dependent percolation introduced in [2], where it is shown that certain "enhancements" of independent (Bernoulli) percolation, called essential, make the percolation critical probability strictly smaller. In this study we first prove that, for two-dimensional enhancements with a natural monotonicity property, being essential is also a necessary condition to shift the critical point. We then show that (some) critical exponents and the scaling limit of crossing probabilities of a two-dimensional percolation process are unchanged if the process is subjected to a monotonic enhancement that is not essential. This proves a form of universality for all dependent percolation models obtained via a monotonic enhancement (of Bernoulli percolation) that does not shift the critical point. For the case of site percolation on the triangular lattice, we also prove a stronger form of universality by showing that the full scaling limit [12, 13] is not affected by any monotonic enhancement that does not shift the critical point.", "keywords": "enhancement percolation;scaling limit;critical exponents;universality", "title": "Universality in two-dimensional enhancement percolation"} {"abstract": "The cornerstone of successful deployment of large scale grid systems depends on efficient resource discovery mechanisms. In this respect, this paper presents a grid information system supported by a self-structured overlay topology and proactive information caching. The proposed approach features an ant-inspired self-organized overlay construction that maintains a bounded-diameter overlay, and a selective flooding based discovery algorithm that exploits local caches to reduce the number of visited nodes. The caches are periodically exchanged between neighboring nodes using an epidemic replication mechanism that is based on a gossiping algorithm, thus allowing nodes to have a more general view of the network and its resources.
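The epidemic cache exchange just described can be illustrated with a toy gossip round in which a node and a random neighbour merge their caches, keeping the freshest advertisement per resource key. The data structures and the push-pull merge are assumptions for illustration, not the paper's protocol.

```python
# Toy sketch of epidemic (gossip-based) cache replication for resource
# discovery, in the spirit of the grid abstract above.
import random

def gossip_round(caches, neighbours):
    for node, nbrs in neighbours.items():
        peer = random.choice(nbrs)
        merged = dict(caches[node])
        for key, (ts, owner) in caches[peer].items():
            if key not in merged or ts > merged[key][0]:
                merged[key] = (ts, owner)   # keep the freshest entry
        caches[node] = merged
        caches[peer] = dict(merged)         # push-pull: peer adopts the merge

caches = {
    'n1': {'cpu=8': (1, 'n1')},
    'n2': {'gpu=2': (2, 'n2')},
    'n3': {},
}
neighbours = {'n1': ['n2'], 'n2': ['n1', 'n3'], 'n3': ['n2']}
for _ in range(3):
    gossip_round(caches, neighbours)
print(sorted(caches['n3']))   # n3 learns about both resources without flooding
```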
We conducted extensive experimentation that provides evidence that the average number of hops required to efficiently locate resources is limited and that our framework performs well with respect to hit rate and network overhead.", "keywords": "grid computing;overlay networks;collaborative ant algorithms;resource discovery", "title": "proactive information caching for efficient resource discovery in a self-structured grid"} {"abstract": "Recently, bilinear pairings on elliptic curves have raised great interest in the cryptographic community. Based on their good properties, many excellent ID-based cryptographic schemes have been proposed. However, in these proposed schemes, the private key generator should be assumed trusted, while in real environments, this assumption does not always hold. To overcome this weakness, in this paper, we use the threshold technology to devise a secure ID-based signcryption scheme. Since the threshold technology is adopted not only in the master key management but also in the group signature, our scheme can achieve high security and resist some malicious attacks under a certain threshold.", "keywords": "bilinear pairings;identity-based cryptography;threshold scheme;signcryption", "title": "robust id-based threshold signcryption scheme from pairings"} {"abstract": "LR-fuzzy numbers are widely used in Fuzzy Set Theory applications based on the standard definition of convex fuzzy sets. However, in some empirical contexts such as, for example, human decision making and ratings, convex representations might not be capable to capture more complex structures in the data. Moreover, non-convexity seems to arise as a natural property in many applications based on fuzzy systems (e.g., fuzzy scales of measurement). In these contexts, the usage of standard fuzzy statistical techniques could be questionable. A possible way out consists in adopting ad-hoc data manipulation procedures to transform non-convex data into standard convex representations. However, these procedures can artificially mask relevant information carried out by the non-convexity property. To overcome this problem, in this article we introduce a novel computational definition of non-convex fuzzy number which extends the traditional definition of LR-fuzzy number. Moreover, we also present a new fuzzy regression model for crisp input/non-convex fuzzy output data based on the fuzzy least squares approach. In order to better highlight some important characteristics of the model, we applied the fuzzy regression model to some datasets characterized by convex as well as non-convex features. Finally, some critical points are outlined in the final section of the article together with suggestions about future extensions of this work.", "keywords": "non-convex fuzzy data;fuzzy linear regression;fuzzy least squares approach;fuzzy rating scales;fuzzy measurement tools", "title": "Non-convex fuzzy data and fuzzy statistics: a first descriptive approach to data analysis"} {"abstract": "Dispatching rules are often suggested to schedule manufacturing systems in real-time. Numerous dispatching rules exist. Unfortunately, no dispatching rule (DR) is known to be globally better than any other.
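Two classical dispatching rules, and the kind of state-dependent switching the neural approach above learns, can be sketched as follows. The hand-written switching condition stands in for the trained network and is purely an assumption.

```python
# Toy sketch of dispatching rules plus a state-dependent selector (a
# hand-written stand-in for the trained neural network described above).

jobs = [  # (id, processing_time, due_date)
    ('J1', 5, 20), ('J2', 2, 6), ('J3', 8, 9),
]

def spt(queue):  # Shortest Processing Time first
    return min(queue, key=lambda j: j[1])

def edd(queue):  # Earliest Due Date first
    return min(queue, key=lambda j: j[2])

def select_rule(queue, now):
    # Assumed heuristic: under due-date pressure prefer EDD, otherwise SPT.
    tightest_slack = min(d - now - p for _, p, d in queue)
    return edd if tightest_slack < 0 else spt

now, queue = 0, list(jobs)
while queue:
    rule = select_rule(queue, now)
    job = rule(queue)
    queue.remove(job)
    now += job[1]
    print(f"t={now:2d} ran {job[0]} via {rule.__name__}")
```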
Their efficiency depends on the characteristics of the system, the operating condition parameters and the production objectives. Several authors have demonstrated the benefits of changing these rules dynamically, so as to take into account the changes that can occur in the system state. A new approach based on neural networks (NN) is proposed here to select in real time, each time a resource becomes available, the most suited DR. The selection is made in accordance with the current system state and the workshop operating condition parameters. Contrary to the few learning approaches presented in the literature to select scheduling heuristics, no training set is needed. The NN parameters are determined through simulation optimization. The benefits of the proposed approach are illustrated through a previously published example of a simplified flow-shop. It is shown that the NN can automatically select efficient DRs dynamically: the knowledge is generated only from simulation experiments, which are driven by the optimization method. Once trained offline, the resulting NN can be used online, in connection with the monitoring system of a flexible manufacturing system.", "keywords": "simulation;optimization;neural network;dynamic scheduling;learning;flow-shop", "title": "Training a neural network to select dispatching rules in real time"} {"abstract": "The design of a manufacturing layout is incomplete without consideration of the aisle structure for material handling. This paper presents a method to solve the layout and aisle structure problems simultaneously by a slicing floorplan. In this representation, the slicing lines are utilised as the aisles for a material handling system. The method decomposes the problem into two stages. The first stage minimises the material handling cost with aisle distance, and the second stage optimises the aisles in the aisle structure. A representation of the slicing floorplan is introduced for the optimisation by genetic algorithms (GAs). The corresponding operators of the GA are also developed. Computational tests demonstrate the effectiveness of the method. A comparison study of the GA and a random search (RS) for the problem was performed. It showed that the GA has a much higher efficiency than the RS, though further study is still needed to improve the efficiency of the GA.", "keywords": "layout;slicing floorplan;material handling;genetic algorithms", "title": "The optimisation of block layout and aisle structure by a genetic algorithm"} {"abstract": "In this paper, a high accuracy stereo reconstruction method for surgical instrument positioning is proposed. Usually, the problem of surgical instrument reconstruction is considered a basic task in computer vision: estimating the 3-D position of each marker on a surgical instrument from three pairs of image points. However, the existing methods consider the 3-D reconstruction of the points separately and thus ignore the structure information. Meanwhile, errors from light variation, imaging noise and quantization still affect the reconstruction accuracy. This paper proposes a method which takes the structure information of surgical instruments as constraints, and reconstructs all the markers on one surgical instrument together. Firstly, we calibrate the instruments before navigation to get the structure parameters. The structure parameters consist of the number of markers, the distances between markers, and a linearity sign for each instrument. Then, the structure constraints are added to the stereo reconstruction.
Finally, a weighted filter is used to reduce the jitter. Experiments conducted on a surgical navigation system showed that our method not only improves accuracy effectively but also greatly reduces the jitter of surgical instruments.", "keywords": "high accuracy reconstruction;computer vision;surgical instruments positioning", "title": "Constrained High Accuracy Stereo Reconstruction Method for Surgical Instruments Positioning"} {"abstract": "The article reports on two theoretical investigations and an experimental investigation into the collapse of six circular conical shells under uniform external pressure. Four of the vessels collapsed through plastic non-symmetric bifurcation buckling and one vessel collapsed through plastic axisymmetric buckling. A sixth vessel failed in a mixed mode of plastic non-symmetric bifurcation buckling, combined with plastic axisymmetric buckling. The theoretical and experimental investigations appeared to indicate that there was a link between plastic non-symmetric bifurcation buckling and plastic axisymmetric buckling. The theoretical investigations were via the finite element method and were used to provide a design chart for these vessels.", "keywords": "conical;shells;buckling;lobar;axisymmetric;finite element;plastic;non-linear", "title": "Plastic collapse of circular conical shells under uniform external pressure"} {"abstract": "In this work, we present two new approaches based on variable neighborhood search (VNS) and ant colony optimization (ACO) for the reconstruction of cross cut shredded text documents. For quickly obtaining initial solutions, we consider four different construction heuristics. While one of them is based on the well known algorithm of Prim, another one tries to match shreds according to the similarity of their borders. Two further construction heuristics rely on the fact that in most cases the left and right edges of paper documents are blank, i.e. no text is written on them. Randomized variants of these construction heuristics are applied within the ACO. Experimental tests reveal that regarding the solution quality the proposed ACO variants perform better than the VNS approaches in most cases, while the running times needed are shorter for VNS. The high potential of these approaches for reconstructing cross cut shredded text documents is underlined by the obtained results.", "keywords": "ant colony optimization;document reconstruction;variable neighborhood search;integer linear programming", "title": "meta-heuristics for reconstructing cross cut shredded text documents"} {"abstract": "Discusses the relation between cybernetics and architecture and pays tribute to Gordon Pask's role and influence. Indicates Pask's contribution to an increasingly environmentally responsive architectural theory that may lead to a more humane and ecologically conscious environment.", "keywords": "architecture;cybernetics", "title": "The cybernetics of architecture: a tribute to the contribution of Gordon Pask"} {"abstract": "One of the challenging problems in forecasting the conditional volatility of stock market returns is that general kernel functions in the support vector machine (SVM) cannot capture the cluster feature of volatility accurately. Since the wavelet function yields features that describe the volatility time series both at various locations and at varying time granularities, this paper constructs a multidimensional wavelet kernel function and proves that it satisfies the Mercer condition, addressing this problem.
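A standard multidimensional wavelet kernel from the literature has the product form shown below, with a Morlet-like mother wavelet; such translation-invariant product kernels are known to satisfy the Mercer condition. Whether this matches the paper's exact construction is an assumption.

```python
# Sketch of a multidimensional wavelet kernel K(x, y) = prod_i h((x_i - y_i)/a)
# with mother wavelet h(u) = cos(1.75 u) exp(-u^2 / 2), a standard form in the
# wavelet-SVM literature (assumed, not necessarily the paper's construction).
import numpy as np

def wavelet_kernel(X, Y, a=1.0):
    """Gram matrix between rows of X and rows of Y."""
    U = (X[:, None, :] - Y[None, :, :]) / a          # pairwise differences
    H = np.cos(1.75 * U) * np.exp(-U**2 / 2.0)       # mother wavelet per dim
    return H.prod(axis=2)                            # product across dimensions

# Tiny usage example with random vectors standing in for lagged-return features.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
K = wavelet_kernel(X, X)
print(K.shape, np.allclose(K, K.T), K[0, 0])         # (50, 50) True 1.0
```

A Gram matrix like K can be passed to any kernel method that accepts precomputed kernels, which is one way such a kernel would be plugged into an SVM-based volatility forecaster.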
The applicability and validity of the wavelet support vector machine (WSVM) for volatility forecasting are confirmed through computer simulations and experiments on real-world stock data.", "keywords": "volatility forecasting;wavelet support vector machine;mercer condition", "title": "Forecasting volatility based on wavelet support vector machine"} {"abstract": "We model the decision making processes in decision support systems and programs as sequential information acquisition processes and compare their usefulness. A Bayesian decision maker is shown to be indifferent between the two approaches. In contrast, a decision maker with bounded rationality prefers the decision support systems approach. The model is extended to group decision support systems, where the interaction between the decision makers and the group facilitator is modelled as a non-cooperative economics game. We show that in some instances the group facilitator would prefer precommitment to an interaction plan rather than allow evolutionary planning of the interaction. This planning is similar to that in a program and may take the form of an organization chart.", "keywords": "decision support systems;programs;group dss;bayesian;bounded rationality;economics model;incentive conflict;noncooperative game;organization chart", "title": "THE ROLE OF USER CAPABILITY AND INCENTIVES IN GROUP AND INDIVIDUAL DECISION-SUPPORT SYSTEMS - AN ECONOMICS PERSPECTIVE"} {"abstract": "Online communities increasingly rely on reputation information to foster cooperation and deter cheating. As rational agents can often benefit from misreporting their observations, explicit incentives must be created to reward honest feedback. Reputation side-payments (e.g., agents get paid for submitting feedback) can be designed to make truth-telling optimal. In this paper, we present a new side-payment scheme adapted for settings where agents repeatedly submit feedback. We rate the feedback set of an agent, rather than individual reports. The CHI-Score of the feedback set is computed based on a Chi-square independence test that assesses the correlation between the agent's feedback and the feedback submitted by the rest of the community. The mechanism has intuitive appeal and generates significantly lower costs than existing incentive-compatible reporting mechanisms.", "keywords": "reputation mechanisms;honest reporting", "title": "using chi-scores to reward honest feedback from repeated interactions"} {"abstract": "Embedding cuts into a branch-and-cut framework is a delicate task, especially when a large set of cuts is available. In this paper we describe a separation heuristic for {0, 1/2}-cuts, a special case of Chvátal-Gomory cuts, that tends to produce many violated inequalities within relatively short time.
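For readers unfamiliar with the cut family named above, the {0, 1/2}-cut definition can be stated compactly. The validity argument below is the standard one; the separation condition in the comments describes the generic problem, not the paper's specific heuristic.

```latex
% {0,1/2}-Chvatal-Gomory cuts: given an integer system A x <= b and a
% multiplier vector u with entries in {0, 1/2} such that u^T A is integral,
% the following inequality is valid for every integer-feasible x, because
% u^T A x is then an integer bounded by u^T b:
u^{\mathsf{T}} A \, x \;\le\; \left\lfloor u^{\mathsf{T}} b \right\rfloor .
% Such a cut removes a fractional LP point x* whenever u^T b is half-integral
% and u^T A x* exceeds its floor; separation searches for such a u.
```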
We report computational results on a large testbed of integer linear programming (ILP) instances of combinatorial problems including satisfiability, max-satisfiability, and linear ordering problems, showing that a careful cut-selection strategy produces a considerable speedup with respect to the cases in which either the separation heuristic is not used at all, or all of the cuts it produces are added to the LP relaxation.", "keywords": "programming, integer, algorithms, cutting plane;programming, integer, applications", "title": "Embedding {0,1/2}-cuts in a branch-and-cut framework: A computational study"} {"abstract": "Two-sided assembly lines are a special type of assembly lines in which workers perform assembly tasks on both sides of the line. This type of line is of crucial importance, especially in the assembly of large-sized products, like automobiles, buses or trucks, in which some tasks must be performed at a specific side of the product. This paper presents an approach to address the two-sided mixed-model assembly line balancing problem. First, a mathematical programming model is presented to formally describe the problem. Then, an ant colony optimisation algorithm is proposed to solve the problem. In the proposed procedure two ants 'work' simultaneously, one at each side of the line, to build a balancing solution which verifies the precedence, zoning, capacity, side and synchronism constraints of the assembly process. The main goal is to minimise the number of workstations of the line, but additional goals are also envisaged. The proposed procedure is illustrated with a numerical example, and results of a computational experiment that exhibit its superior performance are presented.", "keywords": "assembly line balancing;ant colony optimisation;two-sided assembly lines;mixed-model assembly lines", "title": "2-ANTBAL: An ant colony optimisation algorithm for balancing two-sided assembly lines"} {"abstract": "Osteoporosis and osteopenia are frequent complications of thalassemia major (TM) and intermedia (TI). Osteoporosis was found in 23/25 patients with TI and in 115/239 patients with TM. In TM, no association was found with specific polymorphisms in candidate genes (vitamin D receptor, estrogen receptor, calcitonin receptor, and collagen type 1 alpha 1). Osteoporosis in female TM patients was strongly associated with primary amenorrhea (P < .0001), while in male patients with TM hypogonadism was not significantly related to BMD (P= .0001). Low BMD was also associated with cardiomyopathy (P= .01), diabetes mellitus (P= .0001), chronic hepatitis (P= .0029), and increased ALT (P= .01).", "keywords": "deferasirox;β-thalassemia;iron chelator;pediatrics;iron overload", "title": "Evaluation of ICL670, a Once-Daily Oral Iron Chelator in a Phase III Clinical Trial of β-Thalassemia Patients with Transfusional Iron Overload"} {"abstract": "The benefits of multisensor fusion have motivated research in this area in recent years. Redundant fusion methods are used to enhance fusion system capability and reliability. The benefits of beyond wavelets have also prompted scholars to conduct research in this field. In this paper, we propose the maximum local energy method to calculate the low-frequency coefficients of images and compare the results with those of different beyond wavelets. An image fusion step was performed as follows: first, we obtained the coefficients of two different types of images through the beyond wavelet transform.
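The maximum-local-energy selection rule named above can be sketched independently of the transform: at each position, keep the coefficient from whichever source has the larger local energy. The 3x3 window, the uniform weights and the use of SciPy's uniform filter are assumptions; the beyond wavelet transform itself is omitted.

```python
# Minimal sketch of the low-frequency fusion rule described above: per
# position, keep the coefficient from the source image with the larger
# windowed mean of squared coefficients (a scaled local energy).
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowfreq(cA, cB, window=3):
    eA = uniform_filter(cA**2, size=window)   # local energy of source A
    eB = uniform_filter(cB**2, size=window)   # local energy of source B
    return np.where(eA >= eB, cA, cB)

rng = np.random.default_rng(1)
cA = rng.normal(size=(8, 8)); cB = rng.normal(size=(8, 8))
fused = fuse_lowfreq(cA, cB)
print(fused.shape)   # (8, 8): one coefficient chosen per position
```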
Second, we selected the low-frequency coefficients by maximum local energy and obtained the high-frequency coefficients using the sum modified Laplacian method. Finally, the fused image was obtained by performing an inverse beyond wavelet transform. In addition to human vision analysis, the images were also compared through quantitative analysis. Three types of images (multifocus, multimodal medical, and remote sensing images) were used in the experiments to compare the results among the beyond wavelets. The numerical experiments reveal that maximum local energy is a new strategy for attaining image fusion with satisfactory performance.", "keywords": "image fusion;beyond wavelet transform;maximum local energy;sum modified laplacian", "title": "Maximum local energy: An effective approach for multisensor image fusion in beyond wavelet transform domain"} {"abstract": "The High Level Architecture (HLA) is a standardized framework for distributed simulation that promotes reuse and interoperability of simulation components (federates). Federates are processes which communicate with each other in the simulation via the Run Time Infrastructure (RTI). When running a large scale simulation over many nodes/workstations, some may get more workload than others. To run the simulation as efficiently as possible, the workload should be uniformly distributed over the nodes. Current RTI implementations are very static, and do not allow any load balancing. Load balancing of an HLA federation can be achieved by scheduling new federates on the node with the least load and migrating executing federates from a highly loaded node to a lightly loaded node. Process migration has been a topic of research for many years, but not within the context of HLA. This paper focuses on process migration within the HLA framework.", "keywords": "high level architecture;load balancing;federate migration;distributed simulation", "title": "hla federate migration"} {"abstract": "College students' motivational beliefs influence their online behavior and ability to think critically. In the present study, doctoral health science students' reports of motivation, as measured by the California Measure of Mental Motivation, reasoning skill, as measured by the Health Science Reasoning Test, and Web-CT records of online activity during a Web-CT-based statistics course were explored. Critical thinking skill and disposition each contributed unique variance to student grades, with age, organization disposition, and analysis skill as the strongest predictors. The youngest students, those of so-called millennial age, born after 1982, were those with the lowest critical thinking skills and dispositions, and the lowest grades in the class. Future research must take into consideration discrepancies between skill and disposition and interactions with age or cohort. At present, and contrary to popular wisdom, older students may make better online learners than younger ones.", "keywords": "critical thinking dispositions;critical thinking skills;health science students;online communication", "title": "Online activity, motivation, and reasoning among adult learners"} {"abstract": "This paper presents a methodology for automatic simulation and verification of pipelined microcontrollers. Using this methodology, we can generate the simulation for the instruction set architecture (ISA), abstract finite state machine (FSM) and pipelined register transfer level design, and compare the simulation results across different levels quickly.
We have implemented our method in the simulation and verification of a synthesized microcontroller HT_4 using our behavioral synthesis tool.", "keywords": "functional verification;simulation;high level synthesis;microcontrollers;microprocessors", "title": "Automatic simulation and verification of pipelined microcontrollers"} {"abstract": "The areas of planning and scheduling (from the Artificial Intelligence point of view) have seen important advances thanks to the application of constraint satisfaction techniques. Currently, many important real-world problems require efficient constraint handling for planning, scheduling and resource allocation to competing goal activities over time in the presence of complex state-dependent constraints. Solutions to these problems require integration of resource allocation and plan synthesis capabilities. Hence, to manage such complex problems, planning, scheduling and constraint satisfaction must be interrelated. This special issue on Constraint Satisfaction for Planning and Scheduling Problems compiles a selection of papers dealing with various aspects of applying constraint satisfaction techniques in planning and scheduling. The core of the submitted papers was formed by the extended versions of papers presented at COPLAS'2009: ICAPS 2009 Workshop on Constraint Satisfaction Techniques for Planning and Scheduling Problems. This issue presents novel advances on planning, scheduling, constraint programming/constraint satisfaction problems (CSPs) and many other common areas that exist among them. On the whole, this issue mainly focuses on managing complex problems where planning, scheduling, constraint satisfaction and search must be combined and/or interrelated, which entails an enormous potential for practical applications and future research.", "keywords": "planning;scheduling;constraint programming;search", "title": "Constraint satisfaction for planning and scheduling problems"} {"abstract": "Induction (or transformation) by bipartite graphs is one of the most important operations on matroids, and it is well known that the induction of a matroid by a bipartite graph is again a matroid. As an abstract form of this fact, the induction of a matroid by a linking system is known to be a matroid. M-convex functions are quantitative extensions of matroidal structures, and they are known as discrete convex functions. As with matroids, it is known that the induction of an M-convex function by networks generates an M-convex function. As an abstract form of this fact, this paper shows that the induction of an M-convex function by linking systems generates an M-convex function. Furthermore, we show that this result also holds for M-convex functions on constant-parity jump systems. Previously known operations such as aggregation, splitting, and induction by networks can be understood as special cases of this construction.", "keywords": "m-convex function;jump system;linking system", "title": "Induction of M-convex functions by linking systems"} {"abstract": "In this paper we introduce a new scheduling model with learning effects in which the actual processing time of a job is a function of the total normal processing times of the jobs already processed and of the job's scheduled position. We show that the single-machine problems to minimize makespan and total completion time are polynomially solvable. In addition, we show that the problems to minimize total weighted completion time and maximum lateness are polynomially solvable under certain agreeable conditions.
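A learning-effect model of the kind just described can be evaluated with a few lines of code: the actual time of a job in position r is its normal time scaled by a function of the total normal time already processed and of r. The functional form and the exponents below are illustrative assumptions, not the paper's model.

```python
# Toy sketch of a sum-of-processing-times-plus-position learning effect of
# the kind studied above. Functional form and exponents are assumptions.
from itertools import permutations

def makespan(order, p, a1=-0.3, a2=-0.2):
    done, total = 0.0, 0.0
    for r, j in enumerate(order, start=1):
        total += p[j] * (1.0 + done) ** a1 * r ** a2
        done += p[j]                # learning accumulates in normal times
    return total

p = [4.0, 1.0, 3.0, 2.0]            # normal processing times
best = min(permutations(range(len(p))), key=lambda o: makespan(o, p))
print(best, round(makespan(best, p), 3))
# Brute force is only for illustration; the point of the results above is
# that such problems admit simple polynomial-time sequencing rules.
```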
Finally, we present polynomial-time optimal solutions for some special cases of the m-machine flowshop problems to minimize makespan and total completion time.", "keywords": "scheduling;learning effect;single-machine;flowshop", "title": "Some scheduling problems with sum-of-processing-times-based and job-position-based learning effects"} {"abstract": "We perceive programs as single-pass instruction sequences. A single-pass instruction sequence under execution is considered to produce a behaviour to be controlled by some execution environment. Threads as considered in basic thread algebra model such behaviours. We show that all regular threads, i.e. threads that can only be in a finite number of states, can be produced by single-pass instruction sequences without jump instructions if use can be made of Boolean registers. We also show that, in the case where goto instructions are used instead of jump instructions, a bound to the number of labels restricts the expressiveness.", "keywords": "single-pass instruction sequence;regular thread;expressiveness;jump-free instruction sequence", "title": "On the Expressiveness of Single-Pass Instruction Sequences"} {"abstract": "In this paper, adaptive QoS provisioning schemes are proposed for CDMA wireless networks. The proposed schemes consist of two stages. In the first stage, a call admission control (CAC) scheme and a bandwidth allocation scheme are proposed to determine the bandwidth assignment for new connections according to available bandwidth and the adaptable range of active connections. The proposed methods utilize the Markov chain model to estimate the required bandwidth for all connections. This Markov chain model considers the traffic characteristics of both the incoming call request and the existing connections, and estimates the resulting quality of service (QoS). In the second stage, Net-CBK and Net-Share schemes are presented to regulate the active connections according to call blocking rates and unused bandwidth. From the simulation results, the proposed schemes are able to carry more connections and improve system utilization.", "keywords": "code division multiple access;quality of service;variable bit rate;call admission control;packet loss rate", "title": "Adaptive two-stage QoS provisioning schemes for CDMA networks"} {"abstract": "We proposed an optimal switching method for transferring multiple fingertip forces. Based on the proposed method, a haptic training system was developed. The proposed method enhanced the training effect compared with our earlier method.", "keywords": "virtual reality;human-computer interaction;haptic interface;skill transfer;fine motor skill in fingers", "title": "A fine motor skill training system using multi-fingered haptic interface robot"} {"abstract": "This paper deals with the interaction between a bubble and the fluid around it, visualized by a moving object flow image analyzer (MOFIA) consisting of a three-dimensional (3D) moving object image analyzer (MOIA) and two-dimensional particle image velocimetry (PIV). The experiments were carried out for rising bubbles of various sizes and shapes in stagnant water in a vertical pipe. In the MOFIA employed, 3D-MOIA was used to measure bubble motion and PIV to measure fluid flow. The 3D position and shape of a bubble and the velocity field were measured simultaneously. The experimental results showed that the interaction was characterized by the shape, size and density of a bubble.
Concretely, they showed the characteristics of bubble motion, wake shedding, and flow field.", "keywords": "bubble motion;trajectory;flow field;piv;3d-moia;mofia", "title": "Visualization of Bubble-Fluid Interaction by a Moving Object Flow Image Analyzer System"} {"abstract": "The Fibonacci (p,r)-cube is an interconnection topology, which unifies a wide range of connection topologies, such as the hypercube, Fibonacci cube, postal network, etc. It is known that the Fibonacci cubes are median graphs [S. Klavžar, On median nature and enumerative properties of Fibonacci-like cubes, Discrete Math. 299 (2005) 145-153]. The question of determining which Fibonacci (p,r)-cubes are median graphs is solved completely in this paper. We show that Fibonacci (p,r)-cubes are median graphs if and only if either r ≤ p and r ≤ 2, or p = 1 and r = n.", "keywords": "hypercube;fibonacci (p,r)-cube;median graph", "title": "Fibonacci (p,r)-cubes which are median graphs"} {"abstract": "We present an alternative isogeometric BEM for elasticity on NURBS patches. Boundary data and geometry are approximated independently which avoids redundancies. Hierarchical matrices provide almost linear computational complexity. Comparison of different Ansatz functions on NURBS patches. The results show optimal convergence for all tested orders.", "keywords": "subparametric formulation;isogeometric analysis;hierarchical matrices;elasticity;nurbs;convergence", "title": "Fast isogeometric boundary element method based on independent field approximation"} {"abstract": "Shape estimation and object reconstruction are common problems in image analysis. Mathematically, viewing objects in the image plane as random sets reduces the problem of shape estimation to inference about sets. Currently existing definitions of the expected set rely on different criteria to construct the expectation. This paper introduces new definitions of the expected set and the expected boundary, based on oriented distance functions. The proposed expectations have a number of attractive properties, including inclusion relations, convexity preservation and equivariance with respect to rigid motions. The paper introduces a special class of decomposable oriented distance functions for parametric sets and gives the definition and properties of decomposable random closed sets. Further, the definitions of the empirical mean set and the empirical mean boundary are proposed and empirical evidence of the consistency of the boundary estimator is presented. In addition, the paper discusses loss functions for set inference in a frequentist framework and shows how some of the existing expectations arise naturally as optimal estimators. The proposed definitions are illustrated on theoretical examples and real data.", "keywords": "random closed sets;expected set;oriented distance function;set inference;loss functions;boundary estimator;boundary reconstruction", "title": "Expectations of Random Sets and Their Boundaries Using Oriented Distance Functions"} {"abstract": "In order to compare two independent proportions (p(1) and p(2)) there are several useful tests for the parameter d = p(2) - p(1): H(SG): d <= delta vs. K(SG): d > delta, H(SG2): d = delta vs. K(SG2): d not equal delta (where -1 < delta < 1), H(NI): d <= -Delta vs. K(NI): d > -Delta (where Delta >= 0) and H(PE): |d| >= Delta vs. K(PE): |d| < Delta (where Delta > 0). The exact unconditional test requires an ordering statistic, which is usually the z-pooled statistic, to be defined.
The paper gives the definition of 10 new ordering statistics with a similar computational time, and compares the number of points that each introduces into the critical region obtained at error alpha = 5%. The article reaches the conclusion that the most generally powerful statistics are: the z-pooled one with a small continuity correction (c = 1/N if n(1) not equal n(2) or c = 2/N if n(1) = n(2), where N = {n(1)+1}{n(2)+1} and n(i) are the sample sizes) and those z-pooled with Yates' continuity correction (c = {n(1)+n(2)}/{2n(1)n(2)}). In this paper, the author also showed that Barnard's two classic convexity conditions are redundant, because when one of them is verified the other is also verified. The programs for these tests may be obtained free of charge from the site http://www.ugr.es/local/bioest/software.htm.", "keywords": "difference of proportions;equivalence of proportions;exact confidence intervals;exact tests;non-inferiority;power;unconditional tests;2 x 2 tables", "title": "A numerical comparison of several unconditional exact tests in problems of equivalence based on the difference of proportions"} {"abstract": "The minimum energy control problem for positive continuous-time linear systems with bounded inputs is formulated and solved. Sufficient conditions for the existence of a solution to the problem are established. A procedure for solving the problem is proposed and illustrated with a numerical example.", "keywords": "positive system;continuous time;minimum energy control;bounded inputs", "title": "MINIMUM ENERGY CONTROL OF POSITIVE CONTINUOUS-TIME LINEAR SYSTEMS WITH BOUNDED INPUTS"} {"abstract": "Wu and Huang (Advances in Computer Games, pp. 180-194, 2006) presented a new family of k-in-a-row games, among which Connect6 (a kind of six-in-a-row) attracted much attention. For Connect6 as well as the family of k-in-a-row games, this paper proposes a new threat-based proof search method, named relevance-zone-oriented proof (RZOP) search, developed from the lambda search proposed by Thomsen (Int. Comput. Games Assoc. J., vol. 23, no. 4, pp. 203-217, 2000). The proposed RZOP search is a novel, general, and elegant method of constructing and promoting relevance zones. Using this method together with a proof number search, this paper solved effectively and successfully many new Connect6 game positions, including several Connect6 openings, especially the Mickey Mouse opening, which used to be one of the popular openings before we solved it.", "keywords": "board games;connect6;k-in-a-row games;lambda search;threat-based proof search;threat-space search", "title": "Relevance-Zone-Oriented Proof Search for Connect6"} {"abstract": "The aim of this paper is to analyze the performance of a large number of long-lived TCP controlled flows sharing many routers (or links), from the knowledge of the network parameters (capacity, buffer size, topology) and of the characteristics of each TCP flow (RTT, route etc.) when taking synchronization into account. It is shown that in the small buffer case, the dynamics of such a network can be described in terms of iterates of random piecewise affine maps, or geometrically as billiards in a Euclidean space with as many dimensions as the number of flow classes and as many reflection facets as there are routers. This class of billiards exhibits both periodic and nonperiodic asymptotic oscillations, the characteristics of which are extremely sensitive to the parameters of the network.
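A minimal sketch of such dynamics might look as follows (the two-class, three-link topology and all constants are hypothetical; throughputs grow by additive increase until a link constraint, a "reflection facet", is hit, and the classes crossing that link then back off multiplicatively together):

```python
import random

# Minimal sketch of the billiards dynamics for TCP-like flow classes.
# Topology and constants are illustrative assumptions.

links = [([0], 1.0), ([1], 1.0), ([0, 1], 1.6)]   # (classes on link, capacity)
x = [0.2, 0.3]                                    # per-class throughputs

for _ in range(50):
    x = [xi + 0.05 for xi in x]                               # additive increase
    hit = [cls for cls, c in links if sum(x[i] for i in cls) >= c]
    if hit:
        for i in set(random.choice(hit)):                     # one facet reflects
            x[i] /= 2.0                                       # synchronized halving
print(x)
```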
It is also shown that for large populations and in the presence of synchronization, aggregated throughputs exhibit fluctuations that are due to the network as a whole, that follow some complex fractal patterns, and that come on top of other and more classical flow or packet level fluctuations. The consequences on TCP's fairness are exemplified on a few typical cases of small dimension.", "keywords": "bandwidth sharing;dynamical system;flow level modeling;ip traffic;product of random matrices;tcp fairness", "title": "Interaction of TCP flows as billiards"} {"abstract": "We propose a novel feature selection filter for supervised learning, which relies on the efficient estimation of the mutual information between a high-dimensional set of features and the classes. We bypass the estimation of the probability density function with the aid of the entropic-graphs approximation of Rényi entropy, and the subsequent approximation of the Shannon entropy. Thus, the complexity does not depend on the number of dimensions but on the number of patterns/samples, and the curse of dimensionality is circumvented. We show that it is then possible to outperform algorithms which individually rank features, as well as a greedy algorithm based on the maximal relevance and minimal redundancy criterion. We successfully test our method both in the contexts of image classification and microarray data classification. For most of the tested data sets, we obtain better classification results than those reported in the literature.", "keywords": "filter feature selection;mutual information;entropic spanning graphs;microarray", "title": "Feature selection, mutual information, and the classification of high-dimensional patterns"} {"abstract": "Sensor networks are finding significant applications in large scale distributed systems. One of the basic operations in sensor networks is in-network aggregation. Among the various approaches to in-network aggregation, such as gossip and tree, including the hash-based techniques, the tree-based approaches have better performance and energy-saving characteristics. However, sensor networks are highly prone to failures. Numerous techniques suggested in the literature to counteract the effect of failures have not been carefully analyzed. In this paper, we focus on the performance of these tree-based aggregation techniques in the presence of failures. First, we identify a fault model that captures the important failure traits of the system. Then, we analyze the correctness of simple tree aggregation with our fault model. We then use the same fault model to analyze the techniques that utilize redundant trees to improve the variance. The impact of techniques for maintaining the correctness under faults, such as rebuilding or locally fixing the tree, is then studied under the same fault model. We also do the cost-benefit analysis of using the hash-based schemes which are based on FM sketches. We conclude that these fault tolerance techniques for tree aggregation do not necessarily result in substantial improvement in fault tolerance. ", "keywords": "fault tolerance;reliability;sensor network;aggregation;modeling faults", "title": "Analyzing the techniques that improve fault tolerance of aggregation trees in sensor networks"} {"abstract": "Governments from all over the world are looking for ways to reduce costs while at the same time stimulating innovation.
While pursuing both objectives, governments face a major challenge: to operate in a connected environment, engage stakeholders and solve societal problems by utilizing new methods, tools, practices and governance models. As a result, fundamental changes are taking place in how government operates. Such changes are under the larger umbrella of lean government (l-Government). Lean government is a new wave which is appearing as a response to traditional approaches like electronic government (e-Government) and transformational government (t-Government), and aims at reducing the complexity of the public sector by simplifying and streamlining organizational structures and processes, while at the same time stimulating innovation by mobilizing stakeholders. In l-Government, public organizations introduce platforms facilitating innovation and interactions with other public organizations, business and citizens, and focus on their orchestration role. Experimentation, assessment and gradual improvement based on user requirements are key factors for realizing l-Government.", "keywords": "e-government;open government;public sector reform;platform;government as a platform;infrastructure;orchestration", "title": "Lean government and platform-based governance: Doing more with less"} {"abstract": "Alias resolution, the task of identifying IP addresses belonging to the same router, is an important step in building traceroute-based Internet topology maps. Inaccuracies in alias resolution affect the representativeness of constructed topology maps. This in turn affects the conclusions derived from studies that use these maps. This paper presents two complementary studies on alias resolution. First, we present an experimental study to demonstrate the impact of alias resolution on topology measurement studies. Then, we introduce an alias resolution approach called analytic and probe-based alias resolver (APAR). APAR consists of an analytical component and a probe-based component. Given a set of path traces, the analytical component utilizes the common IP address assignment scheme to infer IP aliases. The probe-based component introduces a minimal probing overhead to improve the accuracy of APAR. Compared to the existing state-of-the-art tool ally, APAR uses an orthogonal approach to resolve a large number of IP aliases that ally fails to identify. Our extensive verification study on sample data sets shows that our approach is effective in resolving many aliases with good accuracy. Our evaluations also indicate that the two approaches (ally and APAR) should be used together to maximize the success of the alias resolution process.", "keywords": "alias resolution;internet topology;network measurement", "title": "Resolving IP Aliases in Building Traceroute-Based Internet Maps"} {"abstract": "Recommender systems are widely deployed to provide purchasing suggestions to users on eCommerce websites. The technology that has been adopted by most recommender systems is collaborative filtering. However, with the open nature of collaborative filtering recommender systems, they suffer significant vulnerabilities from being attacked by malicious raters, who inject profiles consisting of biased ratings. In recent years, several attack detection algorithms have been proposed to handle the issue. Unfortunately, their applications are restricted by various constraints. PCA-based methods, while having good performance on paper, still suffer from missing values that plague most user-item matrices.
Classification-based methods require balanced numbers of attacks and normal profiles to train the classifiers. The detector based on SPC (Statistical Process Control) assumes that the rating probability distribution for each item is known in advance. In this research, Beta-Protection (beta P) is proposed to alleviate the problem without the abovementioned constraints. beta P grounds its theoretical foundation on the Beta distribution for easy computation and has stable performance when experimenting with data derived from the public websites of MovieLens. ", "keywords": "shilling attacks detection;collaborative filtering;recommender systems", "title": "beta P: A novel approach to filter out malicious rating profiles from recommender systems"} {"abstract": "The lifetime of node-to-node communication in a wireless ad hoc network is defined as the duration that two nodes can communicate with each other. Failure of the two nodes or failure of the last available route between them ends their communication. In this paper, we analyze the maximum lifetime of node-to-node communication in static ad hoc networks when alternative routes that keep the two nodes connected to each other are node-disjoint. We target ad hoc networks with random topology modeled as a random geometric graph. The analysis is provided for (1) networks that support automatic repeat request (ARQ) at the medium access control level and (2) networks that do not support ARQ. On the basis of this analysis, we propose numerical algorithms to predict, at each moment of network operation, the maximum duration that two nodes can still communicate with each other. Then, we derive a closed-form expression for the expected value of maximum node-to-node communication lifetime in the network. As a byproduct of our analysis, we also derive upper and lower bounds on the lifetime of node-disjoint routes in static ad hoc networks. We verify the accuracy of our analysis using extensive simulation studies. ", "keywords": "node-to-node communication lifetime;node-disjoint routes;route lifetime;network connectivity;ad hoc networks", "title": "On the lifetime of node-to-node communication in wireless ad hoc networks"} {"abstract": "Some researchers have found that unionized firms are less likely to pursue automation because high wage demands deprive them of the necessary capital required to invest in advanced manufacturing technology (AMT). It has also been suggested that stringent work rules and technology agreements can make the substitution of new technology for union labor too expensive. Others have found, however, that the pursuit of high wage policies and the resultant requirement for improved worker and machine productivity can create a positive environment for technological change. This exploratory study examines the relationships between firm-level union status and the adoption and performance of AMT in the discrete parts durable-goods manufacturing industry. Analyses of our sample, which included Chi-square tests, t-tests, correlation analyses and multiple linear regression analyses, revealed a union effect on the adoption of just-in-time technology and a moderately positive union effect on performance.
Results of analyses of the impact of union status, firm size and several human factor variables on firm performance are also presented and discussed.", "keywords": "industrial relations;trade unions;advanced manufacturing technologies;implementation", "title": "Human factors in the adoption and performance of advanced manufacturing technology in unionized firms"} {"abstract": "The increasing growth of data on protein-protein interaction (PPI) networks has boosted research on their comparative analysis. In particular, recent studies proposed models and algorithms for performing network alignment, that is, the comparison of networks across species for discovering conserved functional complexes. In this paper, we present an algorithm for dividing PPI networks, prior to their alignment, into small sub-graphs that are likely to cover conserved complexes. This allows one to perform network alignment in a modular fashion, by acting on pairs of resulting small sub-graphs from different species. The proposed dividing algorithm combines a graph-theoretical property (articulation) with a biological one (orthology). Extensive experiments on various PPI networks are conducted in order to assess how well the sub-graphs generated by this dividing algorithm cover protein functional complexes and whether the proposed pre-processing step can be used for enhancing the performance of network alignment algorithms. Source code of the dividing algorithm is available upon request for academic use. ", "keywords": "protein interaction network division;modular network alignment;conserved protein complexes", "title": "Dividing protein interaction networks for modular network comparative analysis"} {"abstract": "Many model-based clustering methods are based on a finite Gaussian mixture model. The Gaussian mixture model implies that the data scatter within each group is elliptically shaped. Hence non-elliptical groups are often modeled by more than one component, resulting in model over-fitting. An alternative is to use a mean-variance mixture of multivariate normal distributions with an inverse Gaussian mixing distribution (MNIG) in place of the Gaussian distribution, to yield a more flexible family of distributions. Under this model the component distributions may be skewed and have fatter tails than the Gaussian distribution. The MNIG based approach is extended to include a broad range of eigendecomposed covariance structures. Furthermore, MNIG models where the other distributional parameters are constrained are considered. The Bayesian Information Criterion is used to identify the optimal model and number of mixture components. The method is demonstrated on three sample data sets and a novel variation on the univariate Kolmogorov-Smirnov test is used to assess goodness of fit.", "keywords": "model-based clustering;multivariate normal inverse gaussian distribution;mclust;information metrics;kolmogorov-smirnov goodness of fit", "title": "Clustering with the multivariate normal inverse Gaussian distribution"} {"abstract": "Multiuser receivers improve the performance of spread-spectrum and antenna-array systems by exploiting the structure of the multiaccess interference when demodulating the signal of a user. Much of the previous work on the performance analysis of multiuser receivers has focused on their ability to reject worst case interference. Their performance in a power-controlled network and the resulting user capacity are less well-understood.
In this paper, we show that in a large system with each user using random spreading sequences, the limiting interference effects under several linear multiuser receivers can be decoupled, such that each interferer can be ascribed a level of effective interference that it provides to the user to be demodulated. Applying these results to the uplink of a single power-controlled cell, we derive an effective bandwidth characterization of the user capacity: the signal-to-interference requirements of all the users can be met if and only if the sum of the effective bandwidths of the users is less than the total number of degrees of freedom in the system. The effective bandwidth of a user depends only on its own SIR requirement, and simple expressions are derived for three linear receivers: the conventional matched filter, the decorrelator, and the MMSE receiver. The effective bandwidths under the three receivers serve as a basis for performance comparison.", "keywords": "decorrelator;effective bandwidth;effective interference;mmse receiver;multiuser detection;power control;user capacity;random spreading sequences", "title": "Linear multiuser receivers: Effective interference, effective bandwidth and user capacity"} {"abstract": "Given a graph G and a vertex subset S of V(G), the broadcasting time with respect to S, denoted by b(G,S), is the minimum broadcasting time when using S as the broadcasting set. And the k-broadcasting number, denoted by b_k(G), is defined by b_k(G) = min{b(G,S) | S ⊆ V(G), |S| = k}. Given a graph G and two vertex subsets S, S′ of V(G), define d(v,S) = min{d(v,u) | u ∈ S}, d(S,S′) = min{d(u,v) | u ∈ S, v ∈ S′}, and d(G,S) = max{d(v,S) | v ∈ V(G)}. For all k, 1 ≤ k ≤ |V(G)|, the k-radius of G, denoted by r_k(G), is defined as r_k(G) = min{d(G,S) | S ⊆ V(G), |S| = k}. In this paper, we study the relation between the k-radius and the k-broadcasting numbers of graphs. We also give the 2-radius and the 2-broadcasting numbers of the grid graphs, and the k-broadcasting numbers of the complete n-partite graphs and the hypercubes.", "keywords": "k-radius;k-broadcasting number;cartesian product;path;hypercube", "title": "The multiple originator broadcasting problem in graphs"} {"abstract": "The use of rules in a distributed environment creates new challenges for the development of active rule execution models. In particular, since a single event can trigger multiple rules that execute over distributed sources of data, it is important to make use of concurrent rule execution whenever possible. This paper presents the details of the integration rule scheduling (IRS) algorithm. Integration rules are active database rules that are used for component integration in a distributed environment. The IRS algorithm identifies rule conflicts for multiple rules triggered by the same event through static, compile-time analysis of the read and write sets of each rule. A unique aspect of the algorithm is that the conflict analysis includes the effects of nested rule execution that occurs as a result of using an execution model with an immediate coupling mode. The algorithm therefore identifies conflicts that may occur as a result of the concurrent execution of different rule triggering sequences.
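A minimal sketch of the read/write-set conflict test behind such an analysis (the rules and their read/write sets here are hypothetical, not from the paper) could be:

```python
# Minimal sketch of static read/write-set conflict analysis between rules
# triggered by the same event; rule contents are illustrative assumptions.

rules = {
    "r1": {"read": {"A.x"}, "write": {"B.y"}},
    "r2": {"read": {"B.y"}, "write": {"C.z"}},
    "r3": {"read": {"D.w"}, "write": {"D.w"}},
}

def conflicts(a, b):
    # two rules conflict if either one writes what the other reads or writes
    return bool(a["write"] & (b["read"] | b["write"]) or
                b["write"] & (a["read"] | a["write"]))

pairs = [(i, j) for i in rules for j in rules
         if i < j and conflicts(rules[i], rules[j])]
print(pairs)  # -> [('r1', 'r2')]: r1, r2 must be ordered; r3 may run concurrently
```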
The rules are then formed into a priority graph before execution, defining the order in which rules triggered by the same event should be processed. Rules with the same priority can be executed concurrently. The IRS algorithm guarantees confluence in the final state of the rule execution. The IRS algorithm is applicable for rule scheduling in both distributed and centralized rule execution environments.", "keywords": "active rules;concurrent rule execution;rule scheduling algorithm;confluence analysis", "title": "A concurrent rule scheduling algorithm for active rules"} {"abstract": "We propose a novel approach for the identification of human implicit visual search intention based on eye movement patterns and pupillary analysis, in general, as well as pupil size, gradient of pupil size variation, fixation length and fixation count corresponding to areas of interest, and fixation count corresponding to non-areas of interest, in particular. The proposed model identifies human implicit visual search intention as task-free visual browsing or task-oriented visual search. Task-oriented visual search is further identified as task-oriented visual search intent generation, task-oriented visual search intent maintenance, or task-oriented visual search intent disappearance. During a visual search, measurement of the pupillary response is greatly influenced by external factors such as the intensity and size of the visual stimulus. To alleviate the effects of external factors, we propose a robust baseline model that can accurately measure the pupillary response. Graphical representation of the measured parameter values shows significant differences among the different intent conditions, which can then be used as features for identification. By using the eye movement patterns and pupillary analysis, we can detect the transitions between different implicit intentions (task-free visual browsing intent to task-oriented visual search intent, and task-oriented visual search intent maintenance to task-oriented visual search intent disappearance) using a hierarchical support vector machine. In the proposed model, the hierarchical support vector machine is able to identify the transitions between different intent conditions with greater than 90% accuracy.", "keywords": "implicit intention detection;task-free visual browsing intent;task-oriented visual search intent;intention recognition;human computer interface & interaction;pupillary analysis;eye tracking;pupil dilation", "title": "Identification of human implicit visual search intention based on eye movement and pupillary analysis"} {"abstract": "This paper presents a methodology and an associated technology to create context-specific usability guidelines. The objective is to transform usability guidelines into a proactive resource that software developers can employ early and often in the development process. The methodology ensures conformance with established guidelines, but has the flexibility to use design experiences to adapt the guidelines to meet the emergent and diverse requirements of modern user interface design. Case-based and organizational learning technology is used to support the methodology and provide valuable resources for software developers.
", "keywords": "guidelines;human-computer interaction;software development methodologies;software engineering;standards;tools for working with guidelines;usability", "title": "A methodology and tools for applying context-specific usability guidelines to interface design"} {"abstract": "The IEC model for distributed control systems (DCSs) was adopted for the implementation of a new generation engineering tool. However, it was found that this approach does not exploit all the benefits of the object and component technologies. In this paper, we present the enhanced 4-layer architecture that proved to be very helpful in the identification of the key abstractions required for the design of the new generation of function block based engineering tools. Despite being IEC-compliant, the proposed approach introduces a number of extensions and modifications to the IEC-model to improve the development process. The Unified Modelling Language is exploited during the requirements phase of DCSs, but the use of the FB construct is confected during the design phase.", "keywords": "corfu ess;corfu fbdk;iec61499;case tool;distributed control systems;engineering tool;function block", "title": "towards an engineering tool for implementing reusable distributed control systems"} {"abstract": "Cooperative communication has great potential to improve the wireless channel capacity by exploiting the antennas on wireless devices for spatial diversity. The performance in cooperative communication depends on careful resource allocation such as relay selection and power control. In this paper, the network is expanded and more than one source is used. What is proposed is a distributed buyer/seller game theoretic framework over multiuser cooperative communication networks in order to stimulate cooperation and improve the system performance. A two-level Stackelberg game is employed to jointly consider the benefits of the source node and the relay nodes in which the source node is modeled as a buyer and the relay nodes are modeled as sellers, respectively. In this work we proposed coded method in which relays amplify and code Source data and send it to destination at the same time and then signal detection occur in destination, but in the codeless network relays send source data separately to destination. So, here coded and codeless networks are compared and contrasted. The stimulation results revealed that the proposed coded method performed better than the codeless ones; furthermore, the research shows that relays near the sources can play a significant role in increasing source nodes utility, so every source would like to buy more power from these preferred relays. Also, the proposed algorithm enforces truthful power demands.", "keywords": "cooperative communication;power control;stackelberg game;game theory", "title": "Power Allocation in Cooperative Communication System Based on Stackelberg Game"} {"abstract": "During activated states in vivo, neocortical neurons are subject to intense synaptic activity and high-amplitude membrane potential (Vm) ( V m ) fluctuations. These high-conductance states may strongly affect the integrative properties of cortical neurons. We investigated the responsiveness of cortical neurons during different states using a combination of computational models and in vitro experiments (dynamic-clamp) in the visual cortex of adult guinea pigs. Spike responses were monitored following stochastic conductance injection in both experiments and models. 
We found that cortical neurons can operate in a continuum between two different modes: during states with equal excitatory and inhibitory conductances, the firing is mostly correlated with an increase in excitatory conductance, which is a rather classic scenario. In contrast, during states dominated by inhibition, the firing is mostly related to a decrease in inhibitory conductances (dis-inhibition). This model prediction was tested experimentally using dynamic-clamp, and the same modes of firing were identified. We also found that the signature of spikes evoked by dis-inhibition is a transient drop of the total membrane conductance prior to the spike, which is typical of states with dominant inhibitory conductances. Such a drop should be identifiable from intracellular recordings in vivo, which would provide an important test for the presence of inhibition-dominated states. In conclusion, we show that in artificial activated states, not only can inhibition determine the conductance state of the membrane, but inhibitory inputs may also have a determinant influence on spiking. Future analyses and models should focus on verifying if such a determinant influence of inhibitory conductance dynamics is also present in vivo.", "keywords": "spike-triggered average;conductance dynamics;dynamic-clamp", "title": "Inhibitory conductance dynamics in cortical neurons during activated states"} {"abstract": "In this article, we present a centralized fleet management system (CFMS) for cybernetic vehicles called cybercars. Cybercars are automatically guided vehicles for passenger transport on dedicated networks like amusement parks, shopping centres, etc. The users make reservations for the vehicles through phone, internet, kiosk, etc., and the CFMS schedules the cybercars to pick up the users at their respective stations at desired time intervals. The CFMS has centralized control of the routing network and performs pooling of customer requests, scheduling and routing of cybercars to customers, empty cybercars to new services or parking stations and those running below their threshold battery levels to recharging stations. The challenges before the CFMS are to assure conflict-free routing, accommodate immediate requests from customers, update vehicle paths dynamically and minimize congestion on the whole network. We present the approaches used by the CFMS to ensure these functionalities and demonstrate a numerical illustration on a test network.", "keywords": "fleet management;intelligent vehicle;decision support", "title": "Centralized fleet management system for cybernetic transportation"} {"abstract": "This article presents an educational tool to be used in signal processing interpolation-related subjects. The aim is to contribute to the better consolidation of acquired theoretical knowledge, allowing students to test signal reconstruction algorithms and visualize the results obtained by the usage of such algorithms, and how several parameters affect their convergence and performance. ", "keywords": "signal;reconstruction;educational;interpolation;over-sampling", "title": "Signal processing interpolation educational workbench"} {"abstract": "Videos play an ever-increasing role in our everyday lives with applications ranging from news, entertainment, scientific research, security and surveillance. Coupled with the fact that cameras and storage media are becoming less expensive, this has resulted in people producing more video content than ever before.
This necessitates the development of efficient indexing and retrieval algorithms for video data. Most state-of-the-art techniques index videos according to the global content in the scene, such as color, texture, brightness, etc. In this paper, we discuss the problem of activity-based indexing of videos. To address the problem, we first describe activities as a cascade of dynamical systems, which significantly enhances the expressive power of the model while retaining many of the computational advantages of using dynamical models. Second, we derive methods to incorporate view and rate-invariance into these models so that similar actions are clustered together irrespective of the viewpoint or the rate of execution of the activity. We also derive algorithms to learn the model parameters from a video stream and demonstrate how a single video sequence may be clustered into different clusters where each cluster represents an activity. Experimental results for five different databases show that the clusters found by the algorithm correspond to semantically meaningful activities.", "keywords": "video clustering;summarization;surveillance;cascade of linear dynamical systems;view invariance;affine invariance;rate invariance", "title": "Unsupervised view and rate invariant clustering of video sequences"} {"abstract": "Inter-sequence pattern mining can find associations across several sequences in a sequence database, which can discover both a sequential pattern within a transaction and sequential patterns across several different transactions. However, inter-sequence pattern mining algorithms usually generate a large number of recurrent frequent patterns. We have observed that mining closed inter-sequence patterns instead of frequent ones can lead to a more compact yet complete result set. Therefore, in this paper, we propose a model of closed inter-sequence pattern mining and an efficient algorithm called CISP-Miner for mining such patterns, which enumerates closed inter-sequence patterns recursively along a search tree in a depth-first search manner. In addition, several effective pruning strategies and closure checking schemes are designed to reduce the search space and thus accelerate the algorithm. Our experiment results demonstrate that the proposed CISP-Miner algorithm is very efficient and outperforms a compared EISP-Miner algorithm in most cases.", "keywords": "closed patterns;data mining;inter-sequence pattern", "title": "Closed inter-sequence pattern mining"} {"abstract": "This paper presents a unified method for detecting both reflection-symmetry and rotation-symmetry of 2D images based on an identical set of features, i.e., the first three nonzero generalized complex (GC) moments. This method is theoretically guaranteed to detect all the axes of symmetries of every 2D image, if more nonzero GC moments are used in the feature set. Furthermore, we establish the relationship between reflectional symmetry and rotational symmetry in an image, which can be used to check the correctness of symmetry detection. This method has been demonstrated experimentally using more than 200 images.", "keywords": "symmetry detection;reflectional and rotational symmetry;symmetric axis;generalized complex moments;fold number;fold axes;rotationally symmetric image;reflection-symmetric image", "title": "Symmetry detection by generalized complex (GC) moments: A close-form solution"} {"abstract": "The Fourier series of trigonometric-exponential functions f(alpha)(x) = sin(x + alpha) exp(cos(x)) are studied.
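A classical expansion grounds such computations: e^{cos x} has a Fourier series whose coefficients are values of the modified Bessel function I_n at 1, so the coefficients of f_alpha reduce to finite combinations of these values (the expansion below is standard; its use here is only illustrative).

```latex
\[
  e^{\cos x} \;=\; I_0(1) \;+\; 2\sum_{n=1}^{\infty} I_n(1)\cos nx ,
\]
so, by the product-to-sum identities for $\sin(x+\alpha)\cos nx$, the Fourier
coefficients of $f_\alpha(x)=\sin(x+\alpha)\,e^{\cos x}$ are finite linear
combinations of $\cos\alpha$, $\sin\alpha$ and the values $I_n(1)$.
```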
Dual recursive formulae for the corresponding Fourier coefficients are derived. The above coefficients are transcendental. ", "keywords": "fourier series and coefficients;wave function;transcendental number", "title": "On \"drunken sinusoids\" and their Fourier series"} {"abstract": "Pre/postcondition-based specifications are commonplace in a variety of software engineering activities that range from requirements through to design and implementation. The fragmented nature of these specifications can hinder validation as it is difficult to understand if the specifications for the various operations fit together well. In this paper, we propose a novel technique for automatically constructing abstractions in the form of behavior models from pre/postcondition-based specifications. Abstraction techniques have been used successfully for addressing the complexity of formal artifacts in software engineering; however, the focus has been, up to now, on abstractions for verification. Our aim is abstraction for validation and hence, different and novel trade-offs between precision and tractability are required. More specifically, in this paper, we define and study enabledness-preserving abstractions, that is, models in which concrete states are grouped according to the set of operations that they enable. The abstraction results in a finite model that is intuitive to validate and which facilitates tracing back to the specification for debugging. The paper also reports on the application of the approach to two industrial strength protocol specifications in which concerns were identified.", "keywords": "requirements/specifications;validation;automated abstraction", "title": "Automated Abstractions for Contract Validation"} {"abstract": "This paper investigates robust networked control for a class of Takagi-Sugeno (T-S) fuzzy systems. The controller design specifically takes the probabilistic interval distribution of the communication delay into account. A general framework of networked control is first proposed. The two main features are 1) the zero-order hold can choose the latest control input signal when the packets received are out-of-order, and 2) as the result of 1), the models of all kinds of uncertainties in networked signal transfer-including network-induced delay and data packet dropout-are under a unified framework. Next, if the probability distribution of communication delay is known or specified in a design process, sufficient stability conditions for networked T-S fuzzy systems are derived, which are based on the Lyapunov theory. Following this, a stabilizing controller design method is developed, which shows that the solvability of the design depends not only on the upper and lower bounds of the delay but on its probability distribution as well. Finally, a numerical example is used to show the application of the theoretical results obtained in this paper.", "keywords": "linear matrix inequalities;networked control systems;probability distribution;takagi-sugeno fuzzy systems", "title": "Communication-Delay-Distribution-Dependent Networked Control for a Class of T-S Fuzzy Systems"} {"abstract": "We generalize the construction of connected branched polymers and the notion of the volume of the space of connected branched polymers studied by Brydges and Imbrie (Ann Math, 158:1019-1039, 2003), and Kenyon and Winkler (Am Math Mon, 116(7):612-628, 2009) to any central hyperplane arrangement.
The volume of the resulting configuration space of connected branched polymers associated to the hyperplane arrangement is expressed through the value of the characteristic polynomial of the arrangement at 0. We give a more general definition of the space of branched polymers, where we do not require connectivity, and introduce the notion of q-volume for it, which is likewise expressed through the value of the characteristic polynomial of the arrangement. Finally, we relate the volume of the space of branched polymers to broken circuits and show that the cohomology ring of the space of branched polymers is isomorphic to the Orlik-Solomon algebra.", "keywords": "polymers;braid arrangement;hyperplane arrangement;characteristic polynomial;broken circuit;orlik-solomon algebra", "title": "Branched Polymers and Hyperplane Arrangements"} {"abstract": "Mobile networks are becoming increasingly popular as a means for distributing information to a large number of users. In comparison to wired networks, mobile networks are distinguished by potentially much higher variability in demand due to user mobility. Most previous content placement techniques assume a static client demand distribution and, thus, may not perform well in mobile networks. This paper proposes and analyzes a Mobile Dynamic Content Distribution Network model, which takes demand variations into account to decide whether to replicate content and whether to remove previously created replicas in order to minimize total network traffic. We develop two solutions to our model: an offline optimal, which provides an ideal lower-bound on the total traffic, and a practical heuristic online algorithm, which uses demand forecasting to make replication decisions. We provide a thorough evaluation of our solutions, comparing them against ACDN, the only previous dynamic content placement algorithm targeting bandwidth minimization that we are aware of. Our results show that our online algorithm significantly outperforms the ACDN one, reducing total network traffic by up to 85% in a number of experiments covering a large system design space.", "keywords": "cdn;simulation;demand forecasting;mobile network;dynamic content placement;online algorithm", "title": "mobile dynamic content distribution networks"} {"abstract": "Organizations implement data warehouses to overcome the limitations of DSS by adding this database component and thereby improve decision performance. However, no empirical evidence is available to show the effects of a data warehouse (DW) on decision quality and performance. To examine this, a laboratory experiment was conducted. The data warehouse variables considered were the time horizon of the data and its level of aggregation. It was found that using a full data warehouse resulted in significantly better performance and that using it resulted in better performance than using a partial data warehouse (long-time history with no aggregated data). However, using a partial data warehouse was not significantly better than not using a data warehouse at all.", "keywords": "data warehouse;decision support systems;is success;database management;decision performance", "title": "An empirical investigation of the effects of data warehousing on decision performance"} {"abstract": "Object database management systems (ODBMSs) are now established as the database management technology of choice for a range of challenging data intensive applications. Furthermore, the applications associated with object databases typically have stringent performance requirements, and some are associated with very large data sets.
An important feature for the performance of object databases is the speed at which relationships can be explored. In queries, this depends on the effectiveness of different join algorithms into which queries that follow relationships can be compiled. This paper presents a performance evaluation of the Polar parallel object database system, focusing in particular on the performance of parallel join algorithms. Polar is a parallel, shared-nothing implementation of the Object Database Management Group (ODMG) standard for object databases. The paper presents an empirical evaluation of queries expressed in the ODMG Query Language (OQL), as well as a cost model for the parallel algebra that is used to evaluate OQL queries. The cost model is validated against the empirical results for a collection of queries using four different join algorithms, one that is value based and three that are pointer based. ", "keywords": "object database;parallel databases;odmg;oql;benchmark;cost model", "title": "Measuring and modelling the performance of a parallel ODMG compliant object database server"} {"abstract": "We examined the classification and prognostic scoring performances of several computer methods on different feature sets to obtain objective and reproducible analysis of estrogen receptor status in breast cancer tissue samples. Radial basis function network, k-nearest neighborhood search, support vector machines, naive bayes, functional trees, and k-means clustering algorithm were applied to the test datasets. Several features were employed and the classification accuracies of each method for these features were examined. The assessment results of the methods on test images were also experimentally compared with those of two experts. According to the results of our experimental work, a combination of functional trees and the naive bayes classifier gave the best prognostic scores indicating very good kappa agreement values (κ=0.899 and κ=0.949, p<0.001) with the experts. This combination also gave the best dichotomization rate (96.3%) for assessment of estrogen receptor status. Wavelet color features provided better classification accuracy than Laws' texture energy and co-occurrence matrix features.", "keywords": "image processing;classification;nucleus segmentation;estrogen receptor status evaluation;breast cancer prognosis", "title": "Performance comparison of machine learning methods for prognosis of hormone receptor status in breast cancer tissue samples"} {"abstract": "A Chinese handwriting database named HIT-MW is presented to facilitate offline Chinese handwritten text recognition. Both the writers and the texts for handcopying are carefully sampled with a systematic scheme. To collect naturally written handwriting, forms are distributed by postal mail or middleman instead of face to face. The current version of HIT-MW includes 853 forms and 186,444 characters that are produced under an unconstrained condition without preprinted character boxes. The statistics show that the database has an excellent representation of the real handwriting. Many new applications concerning real handwriting recognition can be supported by the database.", "keywords": "standardization;data acquisition;optical character recognition;handwritten chinese text", "title": "Corpus-based HIT-MW database for offline recognition of general-purpose Chinese handwritten text"} {"abstract": "An accurate and fast method for fault diagnosis is an important goal that most techniques have sought to achieve.
For this reason, a fast fault diagnosis method based on Walsh transform and rough sets is proposed in this paper. Firstly, fault signals are fast-transformed by the Walsh matrix, and the Walsh spectra are obtained, whose statistical characteristics constitute feature vectors. Secondly, the feature vectors are discretized and reduced by rough set theory; as a result, key features are retained and diagnosis rules are provided. Finally, utilizing these diagnosis rules, fault diagnosis is carried out experimentally in the spectrum domain and its performance is compared with that of other methods; higher accuracy is achieved and much time is saved, which fully validates the effectiveness of our approach.", "keywords": "fault diagnosis;walsh transform;rough sets;discernable matrix;attribution reduction", "title": "Fault diagnosis based on Walsh transform and rough sets"} {"abstract": "In this paper, we present a weighted fuzzy interpolative reasoning method for sparse fuzzy rule-based systems, where the antecedent variables appearing in the fuzzy rules have different weights. We also present a weights-learning algorithm to automatically learn the optimal weights of the antecedent variables of the fuzzy rules for the proposed weighted fuzzy interpolative reasoning method. We also apply the proposed weighted fuzzy interpolative reasoning method and the proposed weights-learning algorithm to handle the truck backer-upper control problem. The experimental results show that the proposed fuzzy interpolative reasoning method using the weights optimally learned by the proposed weights-learning algorithm gets better truck backer-upper control results than the ones by the traditional fuzzy inference system and the existing fuzzy interpolative reasoning methods. The proposed method provides us with a useful way for fuzzy rules interpolation in sparse fuzzy rule-based systems.", "keywords": "fuzzy interpolative reasoning;sparse fuzzy rule-based systems;weighted antecedent variables;weights-learning algorithm", "title": "Weighted fuzzy interpolative reasoning for sparse fuzzy rule-based systems"} {"abstract": "The Neyman-Pearson (NP) paradigm in binary classification treats type I and type II errors with different priorities. It seeks classifiers that minimize type II error, subject to a type I error constraint under a user-specified level α. In this paper, plug-in classifiers are developed under the NP paradigm. Based on the fundamental Neyman-Pearson Lemma, we propose two related plug-in classifiers which amount to thresholding respectively the class conditional density ratio and the regression function. These two classifiers handle different sampling schemes. This work focuses on theoretical properties of the proposed classifiers; in particular, we derive oracle inequalities that can be viewed as finite sample versions of risk bounds. NP classification can be used to address anomaly detection problems, where asymmetry in errors is an intrinsic property. As opposed to a common practice in anomaly detection that consists of thresholding the normal class density, our approach does not assume a specific form for anomaly distributions.
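A minimal sketch of the plug-in idea (the data, bandwidth and alpha below are illustrative assumptions): estimate the two class-conditional densities, threshold their ratio, and pick the cutoff from held-out class-0 data so that the empirical type I error stays within alpha:

```python
import numpy as np

# Minimal sketch of plug-in Neyman-Pearson classification by thresholding an
# estimated class-conditional density ratio; all constants are assumed.

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 500)          # class 0 ("normal") sample
x1 = rng.normal(2.5, 1.0, 500)          # class 1 ("anomaly") sample

def kde(train, h=0.3):
    # simple Gaussian kernel density estimator
    def p(x):
        z = (x[:, None] - train[None, :]) / h
        return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
    return p

p0, p1 = kde(x0[:250]), kde(x1)         # plug-in density estimates
ratio = lambda x: p1(x) / (p0(x) + 1e-300)

alpha = 0.05
t = np.quantile(ratio(x0[250:]), 1 - alpha)   # cutoff from held-out class-0 data
print("threshold:", t, "empirical type II error:", float((ratio(x1) <= t).mean()))
```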
Such consideration is particularly necessary when the anomaly class density is far from uniformly distributed.", "keywords": "plug-in approach;neyman-pearson paradigm;nonparametric statistics;oracle inequality;anomaly detection", "title": "A Plug-in Approach to Neyman-Pearson Classification"} {"abstract": "In the boundary element method (BEM), it is well known that the presence of body force gives rise to an additional volume integral that conventionally requires domain discretization for numerical computations. To restore the BEM's distinctive notion of boundary discretization, the present work analytically transforms the volume integral to surface ones for the body-force effect in 3D anisotropic elasticity. On applying Green's Theorem, new fundamental solutions with explicit forms of Fourier series are introduced to facilitate the volume-to-surface transformation. The coefficients of the Fourier-series representations are determined by solving a banded matrix formulated from integrations of the constrained equation. Without doubt, such an approach has fully restored the boundary element method as a truly boundary solution technique for analyzing 3D anisotropic elasticity involving body force. At the end, numerical verifications of the volume-to-surface integral transformation are presented. Also, such an approach has been implemented in an existing BEM code. For demonstrating the implementation, numerical examples are presented with comparisons with ANSYS analysis. To the authors' knowledge, this is the first work in the open literature that reports the successful transformation for 3D anisotropic elasticity.", "keywords": "3d anisotropic elasticity;body force;volume-to-surface integral transformation;boundary element method", "title": "Analytical transformation of the volume integral in the boundary integral equation for 3D anisotropic elastostatics involving body force"} {"abstract": "We present an algorithm combining QoS and collaborative filtering for BPEL adaptation. The combination introduces collaborative filtering functionality while maintaining high QoS. We exploit the sparsity of the rating matrix to tackle known issues of CF. We evaluate the approach both in terms of performance and adaptation QoS.", "keywords": "ws-bpel;adaptation;quality of service;collaborative filtering;metasearch algorithms", "title": "An integrated framework for adapting WS-BPEL scenario execution using QoS and collaborative filtering techniques"} {"abstract": "llc is a high-level parallel language that provides support for some of the most widely used algorithmic skeletons. The language has a syntax based on OpenMP-like directives and the compiler uses direct translation to MPI to produce parallel code. To evaluate the performance of our prototype compiler we present computational results for some of the skeletons available in the language on different architectures. Particularly in the presence of coarse-grain parallelism, the results reflect similar performances for the llc compiled version and ad hoc MPI or OpenMP implementations. In all cases, the performance loss with respect to a direct MPI implementation is clearly compensated by a significantly smaller development effort. ", "keywords": "parallel programming;high-level language;algorithmic skeletons;compilers;openmp;mpi", "title": "Basic skeletons in llc"} {"abstract": "An approach based on the decision maker's judgment is proposed by furnishing multiple solutions of part-family and machine-cell formation of a cellular manufacturing system.
The reason for relying on the judgment of the decision-maker is the complexity and the many constraints encountered in practice. Some examples of these practical constraints are workload balancing, ill-defined systems, the existence of exceptional elements, and the presence of various uncertain factors in the system. The basic approach is based on the concept of nearest-neighborhood between machines and parts. The procedure, which can be used to identify multiple grouping patterns of machines and parts, consists of two algorithms: grouping and branching, association, and combining. Numerical examples are provided, especially for ill-structured problems, to illustrate the approach. ", "keywords": "group technology;cellular manufacturing;multiple grouping patterns;interactive decision making", "title": "A multisolution method for cell formation - Exploring practical alternatives in group technology manufacturing"} {"abstract": "In order to provide electricity economically and safely to users, a Distribution Automation System (DAS) monitors and operates the components of distribution systems remotely through communication networks. Fiber optic communication networks have primarily been used for DASs in Korea because of their huge bandwidth and dielectric noise immunity. However, the fiber optic communication network has some shortcomings, particularly that its installation cost and communication fees are expensive. This paper proposes a complex communication network, where WLANs are linked into a fiber optic network to expand DASs in distribution lines inexpensively. A DAS wireless bridge (DWB) is designed for the proposed communication network using IEEE 802.11a WLAN technology. The feasibility of the proposed network is verified experimentally.", "keywords": "distribution automation;ieee 802.11a wlan;fiber optic cable;wireless bridge;complex communication network", "title": "A complex communication network for distribution automation using a fiber optic network and WLANs"} {"abstract": "In this paper, the authors have proposed a method of segmenting gray level images using multiscale morphology. The approach resembles the watershed algorithm in the sense that the dark (respectively bright) features, which are basically canyons (respectively mountains) on the surface topography of the gray level image, are gradually filled (respectively clipped) using multiscale morphological closing (respectively opening) by reconstruction with an isotropic structuring element. The algorithm detects valid segments at each scale using three criteria, namely growing, merging and saturation. Segments extracted at various scales are integrated in the final result. The algorithm is composed of two passes preceded by a preprocessing step for simplifying small-scale details of the image that might cause over-segmentation. In the first pass, feature images at various scales are extracted and kept in the respective levels of morphological towers. In the second pass, potential features contributing to the formation of segments at various scales are detected. Finally, the algorithm traces the contours of all such contributing features at various scales. The implemented scheme is executed on a set of test images (synthetic as well as real) and the results are compared with those of a few other standard methods. 
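A minimal sketch of the opening/closing by reconstruction used in the segmentation abstract above, assuming scikit-image is available; the structuring-element radii of the scale tower are hypothetical.

```python
from skimage import data
from skimage.morphology import disk, erosion, dilation, reconstruction

img = data.camera().astype(float)

def opening_by_reconstruction(image, radius):
    seed = erosion(image, disk(radius))           # clip bright "mountains"
    return reconstruction(seed, image, method='dilation')

def closing_by_reconstruction(image, radius):
    seed = dilation(image, disk(radius))          # fill dark "canyons"
    return reconstruction(seed, image, method='erosion')

# a morphological tower: one simplified image per scale
tower = [opening_by_reconstruction(img, r) for r in (1, 2, 4, 8)]
```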
A quantitative measure of performance is also formulated for comparing the methods.", "keywords": "closing by reconstruction;gray-level image segmentation;morphological towers;multiscale morphology;opening by reconstruction;performance analysis", "title": "Multiscale morphological segmentation of gray-scale images"} {"abstract": "Business needs, the availability of huge volumes of data and the continuous evolution of Web service functions drive the need for applying data mining in the Web service domain. This article recommends several data mining applications that can help address problems concerned with the discovery and monitoring of Web services. This article then presents a case study on applying the clustering data mining technique to Web service usage data to improve the Web service discovery process. This article also discusses the challenges that arise when applying data mining to Web services usage data and abstract information.", "keywords": "data mining;knowledge discovery;service discovery;web services", "title": "Data mining in Web services discovery and monitoring"} {"abstract": "Our long-term goal is the development of a general framework for specifying, structuring, and interoperating provers. Our main focus is on the formalization of the architectural and implementational choices that underlie the construction of such systems. This paper has two main goals. The first is to introduce the main intuitions underlying the proposed framework. We concentrate on its use in the integration of provers. The second is the development of the notion of reasoning theory, meant as the formalization of the notion of "implementation of the logic" of a prover. As an example we sketch an analysis, at the reasoning theory level, of the integration of linear arithmetic into the NQTHM simplification process.", "keywords": "open mechanized reasoning system;reasoning theory;nqthm;linear arithmetic", "title": "Reasoning theories"} {"abstract": "In motion simulations, video games and animation films, many interactions between characters and virtual environments are needed. Even though realistic motion data can be derived from MoCap systems, motion editing and synthesis, animators must manually adapt these motion data to a specific virtual environment, which is a tedious and time-consuming job. Here we propose a framework to program the movements of characters and generate navigation animations in virtual environments. Given a virtual environment, a visual user interface is provided for animators to interactively generate motion scripts, which describe the characters' movements in the scene and are finally used to retrieve motion clips from a MoCap database and generate navigation animations automatically. This framework also provides a flexible mechanism for animators to obtain varied resulting animations through a configurable table of motion bias coefficients and an interactive visual user interface. ", "keywords": "human animation;motion programming;motion scripts", "title": "Automatic generation of human animation based on motion programming"} {"abstract": "Conventional wide-baseline image matching aims to establish point-to-point correspondence pairs across the two images under matching. This is normally accomplished by identifying those feature points with the most similar local features represented by feature descriptors and measuring the feature-vector distance based on the nearest neighbor matching criterion. 
However, a large number of mismatches would be incurred, especially when the two images under comparison have a large viewpoint variation with respect to each other or involve very different backgrounds. In this paper, a new mismatch removal method is proposed by utilizing a bipartite graph to first establish one-to-one coherent region pairs (CRPs), which are then used to verify whether each point-to-point matching pair is a correct match or not. The generation of CRPs is achieved by applying the Hungarian method to the bipartite graph, together with the use of the proposed region-to-region similarity measurement metric. Extensive experimental results have demonstrated that our proposed mismatch removal method is able to effectively remove a significant number of mismatched point-to-point correspondences.", "keywords": "wide-baseline image matching;region correspondence;point correspondence;coherent region pair;sift feature descriptor;bipartite graph matching;region similarity measurement metric;hungarian method", "title": "Bipartite graph-based mismatch removal for wide-baseline image matching"} {"abstract": "In this paper we present the first comparative study of evolutionary classifiers for the problem of road detection. We use seven evolutionary algorithms (GAssist-ADI, XCS, UCS, cAnt, EvRBF, Fuzzy-AB and FuzzySLAVE) for this purpose, and to develop better understanding we also compare their performance with two well-known non-evolutionary classifiers (kNN, C4.5). Further, we identify vision-based features that enable a single classifier to learn to successfully classify a variety of regions in various roads, as opposed to training a new classifier for each type of road. For this we collect a real-world dataset of road images of various roads taken at different times of the day. Then, using Information Gain (IG) and CfsSubsetMerit values, we evaluate the efficacy of our features in facilitating the detection. Our results indicate that intelligent features coupled with the right evolutionary technique provide a promising solution for the domain of road detection.", "keywords": "road detection", "title": "performance evaluation of evolutionary algorithms for road detection"} {"abstract": "This paper addresses the modeling of the static and dynamic parts of the scenario and how to use this information with a sensor-based motion planning system. The contribution in the modeling aspect is a formulation of the detection and tracking of mobile objects and the mapping of the static structure in such a way that the nature (static/dynamic) of the observations is included in the estimation process. The algorithm provides a set of filters tracking the moving objects and a local map of the static structure constructed online. In addition, this paper discusses how this modeling module is integrated into a real sensor-based motion planning system, selectively taking advantage of the dynamic and static information. The experimental results confirm that the complete navigation system is able to move a vehicle in unknown and dynamic scenarios. 
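The coherent-region-pair step of the mismatch-removal abstract above relies on one-to-one assignment via the Hungarian method. A sketch with random stand-in region descriptors follows; the real system would use its SIFT-based region features and its own similarity metric, and the distance gate below is hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
regions_a = rng.random((6, 32))   # hypothetical region descriptors, image A
regions_b = rng.random((8, 32))   # hypothetical region descriptors, image B

# cost matrix of the bipartite graph: pairwise descriptor distances
cost = np.linalg.norm(regions_a[:, None, :] - regions_b[None, :, :], axis=-1)

# Hungarian method: minimum-cost one-to-one region pairing
rows, cols = linear_sum_assignment(cost)
crps = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < 2.0]  # hypothetical gate
# a point-to-point match would then be kept only if its endpoints fall inside some CRP
print(crps)
```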
Furthermore, the system overcomes many of the limitations of previous systems associated with the ability to distinguish the nature of the parts of the scenario.", "keywords": "mobile robots;mapping dynamic environments;sensor-based motion planning", "title": "Modeling dynamic scenarios for local sensor-based motion planning"} {"abstract": "Monotonic classification plays an important role in the field of decision analysis, where decision values are ordered and the samples with better feature values should not be classified into a worse class. Monotonic classification tasks seem conceptually simple, but it is difficult to utilize and explain the order structure in practice. In this work, we discuss the issue of feature selection under the monotonicity constraint based on the principle of large margin. By introducing the monotonicity constraint into existing margin-based feature selection algorithms, we design two new evaluation algorithms for monotonic classification. The proposed algorithms are tested with some artificial and real data sets, and the experimental results show their effectiveness.", "keywords": "monotonic classification;ordinal classification;monotonicity constraint;feature selection;classification margin", "title": "Large-margin feature selection for monotonic classification"} {"abstract": "UML notations require adaptation for applications such as Information Systems (IS). Thus we have defined IS-UML. The purpose of this article is twofold. First, we propose an extension to this language to deal with functional aspects of IS. We use two views to specify IS transactions: the first one is defined as a combination of behavioural UML diagrams (collaboration and state diagrams), and the second one is based on the definition of specific classes of an extended class diagram. The final objective of the article is to consider consistency issues between the various diagrams of an IS-UML specification. In common with other UML languages, we use a metamodel to define IS-UML. We use class diagrams to summarize the metamodel structure and a formal language, B, for the full metamodel. This allows us to formally express consistency checks and mapping rules between specific metamodel concepts.", "keywords": "information system design;unified modelling language notation;metamodel;formal notation", "title": "Using formal metamodels to check consistency of functional views in information systems specification"} {"abstract": "Dense regions in digital mammographic images are usually noisy and have low contrast, and their visual screening is difficult. This paper describes a new method for mammographic image noise suppression and enhancement, which can be particularly effective for screening dense image regions. Initially, the image is preprocessed to improve its local contrast and the discrimination of subtle details. Next, image noise suppression and edge enhancement are performed based on the wavelet transform. At each resolution, coefficients associated with noise are modelled by Gaussian random variables; coefficients associated with edges are modelled by Generalized Laplacian random variables, and a shrinkage function is assembled based on posterior probabilities. The shrinkage functions at consecutive scales are combined, and then applied to the wavelet coefficients. Given a resolution of analysis, the image denoising process is adaptive (i.e. does not require further parameter adjustments), and the selection of a gain factor provides the desired detail enhancement. 
The enhancement function was designed to avoid introducing artifacts in the enhancement process, which is essential in mammographic image analysis. Our preliminary results indicate that our method makes it possible to enhance local contrast and detect microcalcifications and other suspicious structures in situations where their detection would otherwise be difficult. Compared to other approaches, our method requires fewer parameter adjustments by the user.", "keywords": "mammography;contrast equalization;image denoising;image enhancement;multiresolution analysis;wavelets", "title": "Denoising and enhancing digital mammographic images for visual screening"} {"abstract": "Modeling languages, like programming languages, need to be designed if they are to be practical, usable, accepted, and of lasting value. We present principles for the design of modeling languages. To arrive at these principles, we consider the intended use of modeling languages. We conjecture that the principles are applicable to the development of new modeling languages, and for improving the design of existing modeling languages that have evolved, perhaps through a process of unification. The principles are illustrated and explained by several examples, drawing on object-oriented and mathematical modeling languages.", "keywords": "modeling languages;design principles;uml;unification", "title": "Principles for modeling language design"} {"abstract": "The thesis presented here is that the result of engineering is the design, construction, or operation of systems or their subsystems and components, and that the teaching of systems must be central to engineering education. It is maintained that current undergraduate engineering curricula do not give the student adequate appreciation of this major intellectual element of their profession. Five proposals for approaches to correct this deficiency are offered: opportunities for clinical practice throughout all the undergraduate years; the use of distributed interactive simulation technology in semester-long projects; courses or course material on the phenomenology and behavior of systems; use of project management tools in engineering clinics; and encouraging engineering faculty to spend some part of their sabbaticals engaged in system design or operation. Issues of implementation are addressed, including the scaling of these ideas to universities that must meet the needs of large numbers of students.", "keywords": "system;systems of systems;engineering education;clinical practice;engineering practice;simulation", "title": "Systems, systems of systems, and the education of engineers"} {"abstract": "In this paper we combine priority encoding transmission (PET) with a limited retransmission (LR) capacity. We propose the resulting LR-PET scheme as a framework for efficient rate-distortion (RD) optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. In the proposed LR-PET framework, an optimization algorithm determines the level of protection for each element in each transmission slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements being transmitted for the first time with those being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show that this formulation of hypotheses is central to the success of the proposed LR-PET algorithm. 
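As an illustration of the wavelet shrinkage stage in the mammography abstract above, here is a simple soft-thresholding sketch with PyWavelets. The paper's Gaussian/Generalized-Laplacian posterior shrinkage is replaced by a plain MAD-based threshold, which is an assumption of this sketch, as are the test image and noise level.

```python
import numpy as np
import pywt

def denoise(image, wavelet='db2', level=3):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # robust noise-scale estimate from the finest diagonal subband (MAD)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, 3 * sigma, mode='soft') for d in detail))
    return pywt.waverec2(out, wavelet)

rng = np.random.default_rng(3)
clean = np.outer(np.hanning(128), np.hanning(128))
noisy = clean + 0.05 * rng.standard_normal((128, 128))
restored = denoise(noisy)[:128, :128]
print(np.mean((noisy - clean) ** 2), np.mean((restored - clean) ** 2))
```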
Indeed, without this element, a greedy version of LR-PET performs only slightly better than PET without retransmission. Experimental results are reported using both IID and Gilbert-Elliott (GE) channel models, with a Motion JPEG2000 video source, demonstrating substantial performance benefits from the proposed framework.", "keywords": "channel coding;forward error correction;retransmission;scalable video;erasure channels;priority encoding transmission;bursty loss", "title": "Optimal erasure protection for scalably compressed video streams with limited retransmission on channels with IID and bursty loss characteristics"} {"abstract": "The paper deals with the verification of three pipeline models: the non-linear distributed-parameter model, the linear distributed-parameter model and the linear lumped-parameter model. All the models were comparatively verified on the basis of measurements on a real pipeline. ", "keywords": "pipeline models;pade approximation;pipesim", "title": "Verification of various pipeline models"} {"abstract": "In this paper, we have reviewed 25 test procedures that are widely reported in the literature for testing the hypothesis of homogeneity of variances under various experimental conditions. Since a theoretical comparison was not possible, a simulation study has been conducted to compare the performance of the test statistics in terms of robustness and empirical power. Monte Carlo simulation was performed for various symmetric and skewed distributions, numbers of groups, sample sizes per group, degrees of group size inequality, and degrees of variance heterogeneity. Using the simulation results and based on the robustness and power of the tests, some promising test statistics are recommended for practitioners.", "keywords": "anova;homogeneity of variances;levene's test;monte carlo simulation;power of test;robustness;type i error rate", "title": "On some test statistics for testing homogeneity of variances: a comparative study"} {"abstract": "In this paper, the synchronization control of stochastic memristor-based neural networks with mixed delays is studied. Based on the drive-response concept, stochastic differential inclusion theory and the Lyapunov functional method, some new criteria are established to guarantee the exponential synchronization in the pth moment of stochastic memristor-based neural networks. The obtained sufficient conditions can be checked easily and improve the results in earlier publications. Finally, a numerical example is given to illustrate the effectiveness of the new scheme.", "keywords": "memristor;stochastic neural networks;mixed delays;synchronization control", "title": "Synchronization control of stochastic memristor-based neural networks with mixed delays"} {"abstract": "This paper is an overview of some of the methods developed by the Team for Advanced Flow Simulation and Modeling (T*AFSM) [http://www.mems.rice.edu/TAFSM/] to support flow simulation and modeling in a number of "Targeted Challenges". The "Targeted Challenges" include unsteady flows with interfaces, fluid-object and fluid-structure interactions, airdrop systems, and air circulation and contaminant dispersion. The methods developed include special numerical stabilization methods for compressible and incompressible flows, methods for moving boundaries and interfaces, advanced mesh management methods, and multi-domain computational methods. We include in this paper a number of numerical examples from the simulation of complex flow problems. 
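The kind of Monte Carlo comparison run in the variance-homogeneity abstract above is easy to reproduce in miniature with SciPy; the heavy-tailed t(3) data under the null hypothesis and the settings below are illustrative, not the paper's full design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
reps, k, n = 2000, 3, 20
rej_levene = rej_bartlett = 0
for _ in range(reps):
    groups = [rng.standard_t(df=3, size=n) for _ in range(k)]  # equal variances, heavy tails
    rej_levene += stats.levene(*groups).pvalue < 0.05
    rej_bartlett += stats.bartlett(*groups).pvalue < 0.05
print("empirical type I error:",
      "Levene", rej_levene / reps, "| Bartlett", rej_bartlett / reps)
```

Under heavy tails, Bartlett's test typically over-rejects while the median-centered Levene test stays near the nominal level, which is the robustness contrast such studies quantify.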
", "keywords": "computational fluid dynamics;flow simulation;stabilization methods;compressible flow;incompressible flow;multidomain computational methods", "title": "Methods for parallel computation of complex flow problems"} {"abstract": "This paper proposes an environmental sensor bridge system named COSPI for autonomous communication robots. In the near future, various sensors will get installed in day-to-day environments and robots will use information from these sensors to recognize their surroundings. To this end, different sensors can be installed in different environments. COSPI enables the robots to work with different sensor configuration environments without requiring any robot software reconfiguration. The basic idea of COSPI is the abstraction of recognition types. We investigate 278 recognition routines in behavior modules of a communication robot that works in practical situations and classify them based on communication cues of the communication robot. COSPI facilitates robot behavior development as robot behavior and sensor processing procedures can be independently developed. Furthermore, COSPI encourages the reuse of environmental sensor modules for other robot tasks or applications within the same environment. We have conducted an experiment to confirm that a robot can work in a visual sensor environment, in an infrared ray and pyroelectric sensor environment, and in an optical motion capture system environment. The results showed that robots can work using sensor information from environmental sensors in all these environments.", "keywords": "human-robot interaction;communication robot;sensor network;robot behavior development", "title": "Environmental sensor bridge system for communication robots"} {"abstract": "Recently, Biskup [2] classifies the learning effect models in scheduling environments into two types: position-based and sum-of-processing-time-based. In this paper, we study scheduling problem with sum-of-logarithm-processing-time-based and position-based learning effects. We show that the single machine scheduling problems to minimize the makespan and the total completion time can both be solved by the smallest (normal) processing time first (SPT) rule. We also show that the problems to minimize the maximum lateness, the total weighted completion times and the total tardiness have polynomial-time solutions under agreeable WSPT rule and agreeable EDD rule. In addition, we show that m-machine permutation flowshop problems are still polynomially solvable under the proposed learning model.", "keywords": "scheduling;single-machine;flowshop;learning effect", "title": "A note on machine scheduling with sum-of-logarithm-processing-time-based and position-based learning effects"} {"abstract": "A great deal of interest has been paid to induction machine control over the last years. However, most previous works have focused on the speed/flux/torque regulation supposing the machine magnetic circuit to be linear and ignoring the machine power conversion equipments. The point is that speed regulation cannot be ensured in optimal efficiency conditions, for a wide range of speed-set-point and load torque, unless the magnetic circuit nonlinearity is explicitly accounted for in the motor model. On the other hand, the negligence of the power conversion equipments makes it impossible to deal properly with the harmonic pollution issue due to motor power supply grid interaction. 
This paper presents a theoretical framework for a global control strategy for the induction machine and its related power equipment. The proposed strategy involves a multi-loop nonlinear adaptive controller designed to meet the three main control objectives, i.e. tight speed regulation for a wide range of speed-reference variations, flux optimization for energy consumption and power factor correction (PFC). Tools from averaging theory are used to formally describe the control performance.", "keywords": "induction machine;magnetic circuit nonlinearity;ac/dc/ac converters;speed regulation;power factor correction;backstepping technique", "title": "Towards a global control strategy for induction motor: Speed regulation, flux optimization and power factor correction"} {"abstract": "In this paper, the numerical solution of fuzzy Fredholm linear integral equations is considered by applying the Sinc method based on the double exponential transformation with dual fuzzy linear systems. For this purpose, we convert the given fuzzy integral equation to a fuzzy linear system of equations. In this case, the Sinc collocation method with the double exponential transformation is used. Numerical examples are provided to verify the validity of the proposed algorithm.", "keywords": "fuzzy number;fuzzy fredholm integral equations;sinc method;dual systems;double exponential transformation", "title": "Solving fuzzy Fredholm linear integral equations using Sinc method and double exponential transformation"} {"abstract": "This paper focuses on the problem of adaptive control for a class of nonlinear time-delay systems with unknown nonlinearities and strict-feedback structure. Based on the Lyapunov-Krasovskii functional approach, a state-feedback adaptive controller is constructed by backstepping. The proposed adaptive controller guarantees that the system output converges into a small neighborhood of the reference signal, and all the signals of the closed-loop system remain bounded. Compared with existing results, the main advantage of the proposed method is that the controller design is independent of the choice of the fuzzy membership functions; therefore, a priori knowledge of fuzzy approximators is not necessary for control design, and the proposed approach requires only one adaptive law for an nth-order system. Two numerical examples are used to illustrate the effectiveness of the proposed approach.", "keywords": "adaptive control;backstepping;fuzzy-logic systems;nonlinear systems;time delays", "title": "Fuzzy-Approximation-Based Adaptive Control of Strict-Feedback Nonlinear Systems With Time Delays"} {"abstract": "Geographic Routing is a family of routing algorithms that uses geographic point locations as addresses for the purposes of routing. Such routing algorithms have proven to be both simple to implement and heuristically effective when applied to wireless sensor networks. Greedy Routing is a natural abstraction of this model in which nodes are assigned virtual coordinates in a metric space, and these coordinates are used to perform point-to-point routing. Here we resolve a conjecture of Papadimitriou and Ratajczak that every 3-connected planar graph admits a greedy embedding into the Euclidean plane. This immediately implies that all 3-connected graphs that exclude K3,3 as a minor admit a greedy embedding into the Euclidean plane. We also prove a combinatorial condition that guarantees nonembeddability. 
We use this result to construct graphs that can be greedily embedded into the Euclidean plane, but for which no spanning tree admits such an embedding.", "keywords": "greedy embedding;papadimitriou-ratajczak conjecture;christmas cactus graph;excluded minor", "title": "Some Results on Greedy Embeddings in Metric Spaces"} {"abstract": "This paper proposes a novel framework for 3-D object retrieval, taking into account most of the factors that may affect the retrieval performance. Initially, a novel 3-D model alignment method is introduced, which achieves accurate rotation estimation through the combination of two intuitive criteria, plane reflection symmetry and rectilinearity. After the pose normalization stage, a low-level descriptor extraction procedure follows, using three different types of descriptors, which have been proven to be effective. Then, a novel combination procedure of the above descriptors takes place, which achieves higher retrieval performance than each descriptor does separately. The paper also provides an in-depth study of the factors that can further improve the 3-D object retrieval accuracy. These include selection of the appropriate dissimilarity metric, feature selection/dimensionality reduction on the initial low-level descriptors, as well as manifold learning for re-ranking of the search results. Experiments performed on two 3-D model benchmark datasets confirm our assumption that future research in 3-D object retrieval should focus more on the efficient combination of low-level descriptors as well as on the selection of the best features and matching metrics than on the investigation of the optimal 3-D object descriptor.", "keywords": "3-d object retrieval;descriptor extraction;feature selection;manifold learning;rotation estimation", "title": "Investigating the Effects of Multiple Factors Towards More Accurate 3-D Object Retrieval"} {"abstract": "This paper deals with nonlinear hydrodynamic modelling of traffic flow on roads and with the solution of related nonlinear initial and boundary value problems. The paper is in two parts. The first one provides the general framework of hydrodynamic modelling of traffic flow. Some new models are proposed and related to the ones which are known in the literature. The second one is on mathematical methods related to the solution of initial-boundary value problems. A critical analysis and an overview of research perspectives conclude the paper. ", "keywords": "nonlinear hydrodynamics;traffic models;nonlinear sciences;evolution equations", "title": "Nonlinear hydrodynamic models of traffic flow modelling and mathematical problems"} {"abstract": "We present a novel line drawing approach for 3D models by introducing their skeleton information into the rendering process. Based on the silhouettes of the input 3D models, we first extract feature lines in geometric regions by utilizing their curvature, torsion and view-dependent information. Then, the skeletons of the models are extracted by our newly developed skeleton extraction algorithm. After that, we draw the skeleton-guided lines in non-geometric regions based on the skeleton information. These lines are combined with the feature lines to render the final line drawing result using line optimization. Experimental results show that our algorithm can render line drawings more effectively with enhanced skeletons. 
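The defining property behind the greedy-embedding abstract above is easy to verify for a concrete drawing: for every ordered source-destination pair, the source must have a neighbor strictly closer to the destination, so greedy forwarding never gets stuck. A small sketch follows; the example graph and coordinates are hypothetical.

```python
import math

def is_greedy_embedding(adj, pos):
    """True if, for every ordered pair (s, t) with s != t,
    some neighbor of s is strictly closer to t than s is."""
    d = lambda u, v: math.dist(pos[u], pos[v])
    return all(any(d(n, t) < d(s, t) for n in adj[s])
               for s in adj for t in adj if s != t)

# a 4-cycle with a chord, embedded on the unit square
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
pos = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
print(is_greedy_embedding(adj, pos))  # True
```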
The resulting artistic effects can capture the local geometries as well as the global skeletons of the input 3D models.", "keywords": "non-photorealistic rendering;line drawing;skeleton;geometric feature", "title": "Skeleton-enhanced line drawings for 3D models"} {"abstract": "In this paper, we propose a low-complexity video coding scheme based upon 2-D singular value decomposition (2-D SVD), which exploits basic temporal correlation in visual signals without resorting to motion estimation (ME). By exploiting the energy compaction property of 2-D SVD coefficient matrices, high coding efficiency is achieved. The proposed scheme offers a better compromise between computational complexity and temporal redundancy reduction than existing video coding methods. In addition, the problems caused by frame decoding dependence in hybrid video coding, such as the unavailability of random access, are avoided. The comparison of the proposed 2-D SVD coding scheme with the existing relevant non-ME-based low-complexity codecs shows its advantages and potential in applications.", "keywords": "computational complexity;energy compaction;frame independence;simultaneous low-rank approximation of matrices;video decomposition", "title": "Low-Complexity Video Coding Based on Two-Dimensional Singular Value Decomposition"} {"abstract": "OSWALD (Object-oriented Software for the Analysis of Longitudinal Data) is flexible and powerful software written for S-PLUS for the analysis of longitudinal data with dropout, for which there is little other software available in the public domain. The implementation of OSWALD is described through the analysis of a psychiatric clinical trial that compares antidepressant effects in an elderly depressed sample, and through a simulation study. In the simulation study, three different dropout mechanisms are considered: completely random dropout (CRD), random dropout (RD) and informative dropout (ID), and the results from using OSWALD are compared across mechanisms. The parameter estimates for ID-simulated data show less bias with OSWALD under the ID missing data assumption than under the CRD or RD assumptions. Under an ID mechanism, OSWALD does not provide standard error estimates. We supplement OSWALD with a bootstrap procedure to derive the standard errors. This report illustrates the usage of OSWALD for analyzing longitudinal data with dropouts and how to draw appropriate conclusions based on the analytic results under different assumptions regarding the dropout mechanism. ", "keywords": "oswald;non-ignorable missing;longitudinal data analyses;simulation", "title": "Use of OSWALD for analyzing longitudinal data with informative dropout"} {"abstract": "Because of the foreseeable depletion of Internet Protocol (IP) addresses, Network Address Translation (NAT) is ubiquitously deployed to allow hosts to connect to the Internet through a single shared public IP address, which is a popular approach in deploying wireless local area networks (WLANs). Although NAT proves to work well with traditional client/server applications, its existence and non-standard behaviors are a major problem that cripples voice over IP (VoIP) applications. In addition to some efforts which attempt to devise complicated protocols to tackle all NAT varieties, there are also efforts in Internet communities trying to standardize the behaviors of NAT. 
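The energy-compaction property exploited by the 2-D SVD coding abstract above can be demonstrated in a few lines of numpy on a hypothetical smooth "frame"; this shows only the low-rank approximation idea, not the paper's full codec.

```python
import numpy as np

rng = np.random.default_rng(5)
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
frame = np.sin(3 * x) * np.cos(2 * y) + 0.01 * rng.standard_normal((64, 64))

U, s, Vt = np.linalg.svd(frame, full_matrices=False)
k = 8
approx = (U[:, :k] * s[:k]) @ Vt[:k]             # rank-k reconstruction
energy = np.sum(s[:k] ** 2) / np.sum(s ** 2)     # fraction of energy kept
print(f"rank-{k}: {energy:.1%} of energy, MSE={np.mean((frame - approx) ** 2):.2e}")
```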
Therefore, it becomes crucial for a network device to discover the existence of NAT in its subnet and to determine the NAT behaviors, so that it can choose the optimal NAT traversal mechanisms to apply. In this paper, we survey the divergent NAT behaviors and then propose a simplified NAT behavior discovery approach which is more suitable for VoIP applications. The proposed approach can reduce the call establishment time of VoIP applications, which is useful in scenarios where VoIP devices are administered within a specific domain, e.g., 3G cellular networks.", "keywords": "nat;stun;nat behavior discovery", "title": "A Survey of NAT Behavior Discovery in VoIP Applications"} {"abstract": "Embedded cores are being increasingly used in the design of large systems-on-a-chip (SoC). Because of the high complexity of SoCs, design verification is a challenge for system integrators. To reduce the verification complexity, the port-order fault (POF) model was proposed. It has been used for verifying core-based designs, and the corresponding verification pattern generation has been developed. Here, the authors present an automorphic technique to improve the efficiency of automatic verification pattern generation (AVPG) for SoC design verification based on the POF model. On average, the pattern sets obtained on the ISCAS-85 and MCNC benchmarks are 45% smaller and the run time decreases by 16% compared with the previous AVPG results.", "keywords": "automatic verification pattern generation;automorphism;characteristic vector;port-order fault;soc;superset of all automorphism;verification", "title": "An automorphic approach to verification pattern generation for SoC design verification using port-order fault model"} {"abstract": "Relational database systems have traditionally optimized for I/O performance and organized records sequentially on disk pages using the N-ary Storage Model (NSM) (a.k.a. slotted pages). Recent research, however, indicates that cache utilization and performance are becoming increasingly important on modern platforms. In this paper, we first demonstrate that in-page data placement is the key to high cache performance and that NSM exhibits low cache utilization on modern platforms. Next, we propose a new data organization model called PAX (Partition Attributes Across), which significantly improves cache performance by grouping together all values of each attribute within each page. Because PAX only affects layout inside the pages, it incurs no storage penalty and does not affect I/O behavior. According to our experimental results (which were obtained without using any indices on the participating relations), when compared to NSM: (a) PAX exhibits superior cache and memory bandwidth utilization, saving at least 75% of NSM's stall time due to data cache accesses; (b) range selection queries and updates on memory-resident relations execute 17-25% faster; and (c) TPC-H queries involving I/O execute 11-48% faster. Finally, we show that PAX performs well across different memory system designs.", "keywords": "relational data placement;disk page layout;cache-conscious database systems", "title": "Data page layouts for relational databases on deep memory hierarchies"} {"abstract": "The increased deregulation of electricity markets in most nations of the world in recent years has made it imperative that electricity utilities design accurate and efficient mechanisms for determining locational marginal price (LMP) in power systems. 
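A numpy analogy for the NSM-versus-PAX contrast in the abstract above: an array of structs interleaves attributes the way slotted pages do, while grouping each attribute's values together mimics PAX minipages. The schema is hypothetical, and this only illustrates the access-pattern difference, not real page internals.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
prices = rng.random(n)

# NSM-like: whole records stored contiguously (array of structs)
nsm = np.zeros(n, dtype=[('id', 'i8'), ('price', 'f8'), ('qty', 'i8')])
nsm['price'] = prices

# PAX-like: each attribute's values grouped together (struct of arrays)
pax = {'id': np.zeros(n, 'i8'), 'price': prices.copy(), 'qty': np.zeros(n, 'i8')}

# A single-attribute scan strides over 24-byte records in the NSM layout,
# but reads one contiguous array in the PAX layout: better cache-line use.
assert (nsm['price'] > 0.99).sum() == (pax['price'] > 0.99).sum()
```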
This paper presents a comparison of two soft computing-based schemes, artificial neural networks and support vector machines, for the projection of LMP. Our system has useful power system parameters as inputs and the LMP as output. The experimental results obtained suggest that although both methods give highly accurate results, support vector machines slightly outperform artificial neural networks and do so with manageable computational time costs.", "keywords": "locational marginal price;artificial neural networks;support vector machines;back propagation learning algorithm;radial basis function", "title": "A soft computing approach to projecting locational marginal price"} {"abstract": "Two CMOS integrated circuits are presented that utilize metamaterial composite right/left handed (CRLH) transmission lines (TLs) for zero insertion phase at 30 GHz. Specifically, 2 and 3 unit cell structures are presented with controlled insertion phase that is achieved by cascading lumped element capacitors and spiral inductors in an LC network configuration defining the TL unit cells. Furthermore, the fixed TL structures suggest the possibility of zero, advanced or delayed insertion phases by element variation, or by the use of simple active components. Simulation and measured results are in good agreement with CRLH TL theory, and display a linear insertion phase and flat group delay values that depend on the number of unit cells, with an insertion loss of approximately 0.8 dB per cell. These findings suggest that such high-speed CRLH TL structures can be implemented for linear array feeding networks and compact antenna designs in CMOS at millimeter wave frequencies.", "keywords": "composite right and left handed;transmission lines;right-handed;left-handed;group delay;metal-insulator-metal;complementary metal oxide semiconductor", "title": "Composite Right/Left Handed Artificial Transmission Line Structures in CMOS for Controlled Insertion Phase at 30 GHz"} {"abstract": "It has been empirically established that multiobjective evolutionary algorithms do not scale well with the number of conflicting objectives. This paper shows that the convergence rate of all comparison-based multi-objective algorithms, for the Hausdorff distance, is not much better than the convergence rate of random search under certain conditions. The number of objectives must be very moderate and the framework should satisfy the following assumptions: the objectives are conflicting, and lower-bounding the computational cost by the number of comparisons is a good model. Our conclusions are: (i) the number of conflicting objectives is relevant; (ii) criteria based on comparisons with random search are also relevant for multi-objective optimization; (iii) optimization with more than 3 objectives is very hard. Furthermore, we provide some insight into crossover operators.", "keywords": "theory;randomized search heuristics;multi-objective optimization", "title": "On the hardness of offline multi-objective optimization"} {"abstract": "This paper demonstrates the existence of spatial markets for business advice services. A large sample of 3245 client-advisor links is investigated using GIS software. Seventy per cent of links are less than 25 km in extent, 93% are to the nearest local business centre, and only a few are with hinterlands or areas peripheral to main centres. 
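A scikit-learn sketch of the ANN-versus-SVM comparison in the LMP abstract above; the three input features and the synthetic price relation are invented for illustration and do not reflect the paper's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)
X = rng.random((500, 3))   # hypothetical inputs: load, generation, line-flow margin
lmp = 30 + 40 * X[:, 0] - 15 * X[:, 2] + rng.normal(0, 2, 500)  # synthetic LMP ($/MWh)

Xtr, Xte, ytr, yte = train_test_split(X, lmp, random_state=0)
models = {"SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
          "ANN": make_pipeline(StandardScaler(),
                               MLPRegressor((32,), max_iter=2000, random_state=0))}
for name, model in models.items():
    model.fit(Xtr, ytr)
    print(name, "R^2 =", round(model.score(Xte, yte), 3))
```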
The maximum reach of market areas varies by advisor type, averaging only 25-40 km for chambers of commerce and public sector advice services such as Business Link. The maximum reach is 48 km for accountants and banks, and increases to 64 km for customers and suppliers and 74 km for consultants. A threshold for regional level services in major centres can be identified, which ranges from 12,000 to 24,000 businesses in size, depending on advisor type. Service sector firms are generally more localised than manufacturing, and local sourcing of advisors generally declines with firm size and size of business centre. Regional differences are relatively small, but Scotland, Yorkshire and Humberside are the most self-contained for advice, whilst London and the South-East are the least self-contained. This is a contrast to earlier findings by O'Farrell and others. The paper demonstrates a hierarchical and spatial market structure for business advice services that is similar to that in retailing, with firm size and advisor type being the primary influences on differences in demand, and with regional centres most distinct from local centres of supply. Intense localised sourcing of advice from customers and suppliers does not appear to be frequent.", "keywords": "consultancy;business services;local networks;agglomeration;gis;business link;chambers of commerce", "title": "The spatial market of business advice and consultancy to SMEs"} {"abstract": "The issue of Quality of Service (QoS) performance analysis in packet-switched networks has drawn a lot of attention in the networking community. There is a great deal of work, including an elegant theory under the name of network calculus, which focuses on the analysis of deterministic worst-case QoS performance bounds. In the meantime, researchers have studied stochastic QoS performance for specific schedulers. However, most previous works on deterministic QoS analysis or stochastic QoS analysis have only considered a server that provides deterministic service, i.e. deterministically bounded-rate service. Few have considered the behavior of a stochastic server that provides input flows with variable-rate service, as is the case for wireless links. In this paper, we propose a stochastic network calculus to analyze the end-to-end stochastic QoS performance of a system with stochastically bounded input traffic over a series of deterministic and stochastic servers. We also prove that a server serving an aggregate of flows can be regarded as a stochastic server for individual flows within the aggregate. Based on this, the proposed framework is further applied to analyze per-flow stochastic QoS performance under aggregate scheduling.", "keywords": "network calculus;quality of service;generalized stochastically bounded burstiness;stochastic service curve", "title": "A calculus for stochastic QoS analysis"} {"abstract": "For a sequence of dynamic optimization problems, we aim at discussing a notion of consistency over time. This notion can be informally introduced as follows. At the very first time step t0, the decision maker formulates an optimization problem that yields optimal decision rules for all the forthcoming time steps t0, t1, ..., T; at the next time step t1, he is able to formulate a new optimization problem starting at time t1 that yields a new sequence of optimal decision rules. This process can be continued until the final time T is reached. 
A family of optimization problems formulated in this way is said to be dynamically consistent if the optimal strategies obtained when solving the original problem remain optimal for all subsequent problems. The notion of dynamic consistency, well-known in the field of economics, has been recently introduced in the context of risk measures, notably by Artzner et al. (Ann. Oper. Res. 152(1):5-22, 2007) and studied in the stochastic programming framework by Shapiro (Oper. Res. Lett. 37(3):143-147, 2009) and for Markov Decision Processes (MDP) by Ruszczynski (Math. Program. 125(2):235-261, 2010). Here we link this notion with the concept of a state variable in MDP, and show that a significant class of dynamic optimization problems are dynamically consistent, provided that an adequate state variable is chosen.", "keywords": "stochastic optimal control;dynamic consistency;dynamic programming;risk measures", "title": "Dynamic consistency for stochastic optimal control problems"} {"abstract": "Many engineering design optimization problems contain multiple objective functions, all of which are desired to be minimized, say. This paper proposes a method for identifying the Pareto Front and the Pareto Set of the objective functions when these functions are evaluated by expensive-to-evaluate deterministic computer simulators. The method replaces the expensive function evaluations by a rapidly computable approximator based on a Gaussian process (GP) interpolator. It sequentially selects new input sites guided by values of an improvement function given the current data. The method introduced in this paper provides two advances in the interpolator/improvement framework. First, it proposes an improvement function based on the modified maximin fitness function, which is known to identify well-spaced non-dominated outputs when used in multiobjective evolutionary algorithms. Second, it uses a family of GP models that allows for dependence among output function values but which permits zero covariance should the data be consistent with this model. A closed-form expression is derived for the improvement function when there are two objective functions; simulation is used to evaluate it when there are three or more objectives. Examples from the multiobjective optimization literature are presented to show that the proposed procedure can substantially improve on previously proposed statistical improvement criteria for the computationally intensive multiobjective optimization setting.", "keywords": "computer experiment;gaussian process;kriging;pareto optimization;nonseparable gp model;computer simulator model", "title": "Multiobjective optimization of expensive-to-evaluate deterministic computer simulator models"} {"abstract": "We used c-Fos-deficient activated T cells from the spleen and c-Fos-deficient thymocytes to address the capacity of these cells to undergo apoptosis in response to various stimuli. To determine the role of c-Fos in apoptosis regulation in thymocytes, we challenged thymocytes from wild-type and c-Fos-deficient mice with either TPA or the glucocorticoid dexamethasone. After various time points, cells were stained according to the Nicoletti method and analyzed by FACS. Thymocytes from both genotypes exhibited similar efficiency of apoptosis in response to treatment with TPA or dexamethasone. 
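Returning to the dynamic-consistency abstract above: with an adequate state variable, re-solving the problem at a later time step reproduces the tail of the original policy. A small backward-dynamic-programming check on a random finite MDP (the instance is arbitrary and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
T, S, A = 4, 3, 2                                # horizon, states, actions
P = rng.dirichlet(np.ones(S), size=(S, A))       # P[s, a] = distribution of next state
C = rng.random((S, A))                           # stage costs

def solve(t0):
    """Backward dynamic programming from t0 to T; returns policies pi[t][s]."""
    V, pi = np.zeros(S), {}
    for t in range(T - 1, t0 - 1, -1):
        Q = C + P @ V                            # Q[s, a] = c(s, a) + E[V(next state)]
        pi[t] = Q.argmin(axis=1)
        V = Q.min(axis=1)
    return pi

full, resolved = solve(0), solve(1)              # re-formulate the problem at the next step
print(all(np.array_equal(full[t], resolved[t]) for t in range(1, T)))  # True: consistent
```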
Our data provide clear evidence that c-Fos is not required for apoptosis regulation in activated T cells as well as in thymocytes.", "keywords": "c-fos;apoptosis;early activation;induction;t cells", "title": "Early Activation and Induction of Apoptosis in T Cells Is Independent of c-Fos"} {"abstract": "A 21 degree-of-freedom element, based on the FSDT, is derived to study the response of unsymmetrically laminated composite structures in both static and dynamic problems. In the FSDT model used here we have employed an accurate model to obtain the transverse shear correction factor. The dynamic version of the principle of virtual work for laminated composites is expressed in its nondimensional form and the element tangent stiffness and mass matrices are obtained using analytical integration. The element consists of four equally spaced nodes and a node at the middle. The results for the one-dimensional case are within 5% when compared to equivalent one- and two-dimensional problems of static loading, free vibrations and buckling loads.", "keywords": "nonlinear finite element;condensation;shear deformation;laminated composites", "title": "A shear-deformable beam element for the analysis of laminated composites"} {"abstract": "Medical message boards are online resources where users with a particular condition exchange information, some of which they might not otherwise share with medical providers. Many of these boards contain a large number of posts with patient opinions and experiences that would be potentially useful to clinicians and researchers. We present an approach that is able to collect a corpus of medical message board posts, de-identify the corpus, and extract information on potential adverse drug effects discussed by users. Using a corpus of posts to breast cancer message boards, we identified drug-event pairs using co-occurrence statistics. We then compared the identified drug-event pairs with adverse effects listed on the package labels of tamoxifen, anastrozole, exemestane, and letrozole. Of the pairs identified by our system, 75-80% were documented on the drug labels. Some of the undocumented pairs may represent previously unidentified adverse drug effects.", "keywords": "data mining;information extraction;medical message board;drug adverse effect", "title": "Identifying potential adverse effects using the web: A new approach to medical hypothesis generation"} {"abstract": "This paper proposes an AHP-based statistical method for the design of a comprehensive policy alternative, AHPo, for solving societal problems that require a multifaceted approach. In the proposed method, criteria relevant to the goal or focus are structured in the same way as in the conventional AHP. However, the two methods are quite different in regard to the method of quantification. The new method predicts or analyses the impact of the policy alternatives on the overall goal. In other words, it predicts or rationalizes the way people appreciate the situation in which an alternative is adopted and implemented. 
It will serve as a tool for supporting (especially political) decision making.", "keywords": "analytic hierarchy process;analytic network process;multicriteria decision;policy design;household adoption of seismic hazard adjustments", "title": "Analytic hierarchy based policy design method (AHPo) for solving societal problems that require a multifaceted approach"} {"abstract": "Large software projects consist of code written in a multitude of different (possibly domain-specific) languages, which are often deeply interspersed even in single files. While many proposals exist on how to integrate languages semantically and syntactically, the question of how to support this scenario in integrated development environments (IDEs) remains open: How can standard IDE services, such as syntax highlighting, outlining, or reference resolving, be provided in an extensible and compositional way, such that an open mix of languages is supported in a single file? Based on our library-based syntactic extension language for Java, SugarJ, we propose to make IDEs extensible by organizing editor services in editor libraries. Editor libraries are libraries written in the object language, SugarJ, and hence activated and composed through regular import statements on a file-by-file basis. We have implemented an IDE for editor libraries on top of SugarJ and the Eclipse-based Spoofax language workbench. We have validated editor libraries by evolving this IDE into a fully-fledged and schema-aware XML editor as well as an extensible LaTeX editor, which we used for writing this paper.", "keywords": "library;language workbench;dsl embedding;language extensibility", "title": "growing a language environment with editor libraries"} {"abstract": "The goal of this paper is to examine the classification capabilities of various prediction and approximation methods and suggest which are most likely to be suitable for the clinical setting. Various prediction and approximation methods are applied in order to detect and extract those which provide the best differentiation between control and patient data, as well as between members of different age groups. The prediction methods are local linear prediction, local exponential prediction, the delay times method, autoregressive prediction and neural networks. Approximation is computed with local linear approximation, least squares approximation, neural networks and the wavelet transform. These methods are chosen since each has a different physical basis and thus extracts and uses time series information in a different way.", "keywords": "heart rate variability;prediction;approximation;mean error;cardiogram classification;ecg", "title": "Assessment of the classification capability of prediction and approximation methods for HRV analysis"} {"abstract": "Hyperspeech is a speech-only hypermedia application that explores issues of speech user interfaces, navigation, and system architecture in a purely audio environment without a visual display. The system uses speech recognition input and synthetic speech feedback to aid in navigating through a database of digitally recorded speech segments.", "keywords": "conversational interfaces;speech applications;speech synthesis;speech recognition;speech user interfaces;speech as data;hypermedia", "title": "hyperspeech"} {"abstract": "A concept of a multihop ad hoc network and associated algorithms for adaptive clustering in wireless ad hoc networks are presented in this paper. 
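Since the policy-design abstract above builds on AHP, a brief reminder of the core computation may help: priority weights come from the principal eigenvector of a pairwise comparison matrix. The three-criteria matrix below is a hypothetical example, not one from the paper.

```python
import numpy as np

# Saaty-scale pairwise comparisons: A[i, j] = importance of criterion i over j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                                   # priority weights
ci = (vals.real[k] - len(A)) / (len(A) - 1)    # consistency index
print("weights:", np.round(w, 3), "CI:", round(ci, 4))
```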
The algorithms take into account the connectivity of the stations as well as the quality of service requirements. The concept of a centralised ad hoc network is adopted, in which a cluster is defined by a Central Controller (CC) granting access to the radio interface to all terminals in its cluster. By these means the CC contributes to providing quality of service guarantees to the users. This concept is also used in the HiperLAN/2 (HL/2) Home Environment Extension (HEE), an ad hoc wireless LAN standardised by the European Telecommunications Standards Institute (ETSI). The HEE is restricted to one single cluster. It is shown in this article how the network can be extended over several clusters by the introduction of so-called forwarding stations. These forwarders interconnect the clusters and enable multihop connections of users roaming in different clusters. A solution is presented to ensure, as far as possible, an interconnection of clusters by means of the clustering algorithm.", "keywords": "ad hoc networks;clustering;forwarding;routing;hiperlan/2;mobility management;handover", "title": "Outline of a centralised multihop ad hoc wireless network"} {"abstract": "This work presents an implementation strategy which exploits separation of concerns and reuse in a multi-tier architecture to improve the security (availability, integrity, and confidentiality) level of an existing application. Functional properties are guaranteed via wrapping of the existing software modules. Security mechanisms are handled by the business logic of the middle tier: availability and integrity are achieved via replication of the functional modules, and confidentiality is obtained via cryptography. The technique is presented with regard to a case study application. We believe that our experience can be used as a guideline for software practitioners to solve similar problems. We thus describe the conceptual model behind the architecture, discuss implementation issues, and present technical solutions.", "keywords": "security;perfective maintenance;legacy software;corba;replication", "title": "An architecture for security-oriented perfective maintenance of legacy software"} {"abstract": "We are presently witnessing an increasing number of nursing, medical and health-related electronic journals (e-journals) being made available on the World Wide Web, a minority of which are specifically devoted to informatics. We would expect, given the potential of interactive multimedia and computer-mediated communications (i.e. telematics), to also see an increasing diversity of models, but this is not currently the case. Following a brief discussion of some of the issues relevant to electronic publications, the authors present a taxonomy of current nursing e-journal models, including discussion of some examples from around the world that fall into categories within this taxonomy. We describe the model and levels of usage of one particular e-journal, Nursing Standard Online. Some of the issues presented may account for the current relative paucity of high-quality content and innovative models in the development of Web-based e-journals for nurses and other health professionals. We believe it likely that nursing e-journals using current models will need to be specialist rather than generalist if they are to attract a larger audience. 
In concluding our paper, we advocate the development of innovative and increasingly interactive nursing e-journals as the way forward, discussing one particular model which holds promise.", "keywords": "nursing;publishing trends;medical informatics education;online systems;computer communication networks;peer review", "title": "Current and future models for nursing e-journals: making the most of the web's potential"} {"abstract": "The basis of vehicular ad hoc networks (VANETs) is the exchange of data between entities, and making a decision on received data/event is usually based on information provided by other entities. Many researchers utilize the concept of trust to assess the trustworthiness of the received data. Nevertheless, the lack of a review to sum up the best available research on specific questions on trust management in vehicular ad hoc networks is evident. This paper presents a systematic literature review to provide comprehensive and unbiased information about various current trust conceptions, proposals, problems, and solutions in VANETs to increase the quality of data in transportation. For the purpose of the writing of this paper, a total of 111 articles related to the trust model in VANETs published between 2005 and 2014 were extracted from the most relevant scientific sources (IEEE Computer Society, ACM Digital Library, Springer Link, Science Direct, and Wiley Online Library). Ten articles were eventually selected for analysis, based on criteria such as the relevancy and comprehensiveness of the discussion presented in the articles. Using the systematic method of review, this paper succeeds in revealing the main challenges and requirements for trust in VANETs and future research within this scope.", "keywords": "systematic literature review;trust management;vanet;trust metric", "title": "Trust management in vehicular ad hoc network: a systematic review"} {"abstract": "We establish a QSPR model between the Henry's Law constant in the air-water system and the molecular structure of 150 aliphatic hydrocarbons. The simultaneous linear regression analyses on 1086 numerical descriptors reflecting topological, geometrical, and electronic aspects lead to a seven-parameter equation that, when compared to previously reported models, exhibits good calibration and cross-validated parameters: R=0.996, R(l-10%-o)=0.997. As a realistic application, we employ this relationship to estimate the partition coefficient for 39 not-yet-measured chemicals. ", "keywords": "qspr theory;molecular descriptors;multivariable regression analysis;henry's law constant;replacement method", "title": "QSPR study of the Henry's Law constant for hydrocarbons"} {"abstract": "The Multi-Threshold CMOS (MTCMOS) technology has become a popular technique for standby power reduction. This technology utilizes high-Vth sleep transistors to reduce subthreshold leakage currents during the standby mode of CMOS VLSI circuits. The performance of MTCMOS circuits strongly depends on the size of the sleep transistors and the parasitics on the virtual ground network. Given a placed netlist of a row-based MTCMOS design and the number of sleep transistor cells on each standard cell row, this paper introduces an optimal algorithm for linearly placing the allocated sleep transistors on each standard cell row so as to minimize the performance degradation of the MTCMOS circuit, which is in part due to unwanted voltage drops on its virtual ground network.
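As context for the sleep transistor placement problem described above, here is a minimal sketch of one way to search for a placement that minimizes the worst-case virtual-ground voltage drop on a single cell row. The paper's optimal algorithm is not reproduced here; the cost model (drop proportional to a cell's current times its wire distance to the nearest sleep transistor) and all names and values are illustrative assumptions.

```python
from itertools import combinations

def worst_case_drop(cell_pos, cell_current, st_positions, r_per_unit=1.0):
    # Crude proxy: each cell discharges through its nearest sleep transistor;
    # the drop is modelled as current * wire resistance to that transistor.
    return max(i * r_per_unit * min(abs(p - s) for s in st_positions)
               for p, i in zip(cell_pos, cell_current))

def place_sleep_transistors(cell_pos, cell_current, slots, k):
    # Exhaustive search over candidate slot subsets; fine for one short row,
    # whereas the paper targets an efficient optimal algorithm.
    best = min(combinations(slots, k),
               key=lambda st: worst_case_drop(cell_pos, cell_current, st))
    return best, worst_case_drop(cell_pos, cell_current, best)

cells = [0, 2, 3, 5, 8, 9]                 # cell x-coordinates on one row
draws = [1.0, 0.5, 2.0, 1.0, 3.0, 0.5]     # per-cell switching currents (a.u.)
print(place_sleep_transistors(cells, draws, slots=range(10), k=2))
```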
Experimental results show that, compared to existing methods of placing the sleep transistors on cell rows, the proposed technique results in up to 11% reduction in the critical path delay of the circuit.", "keywords": "mtcmos;leakage minimization;placement", "title": "sleep transistor distribution in row-based mtcmos designs"} {"abstract": "One of the key problems in machine learning theory and practice is setting the correct value of the regularization parameter; this is particularly crucial in Kernel Machines such as Support Vector Machines, Regularized Least Squares or Neural Networks with Weight Decay terms. Well-known methods such as Leave-One-Out (or GCV) and Evidence Maximization offer a way of predicting the regularization parameter. This work points out the failure of these methods for predicting the regularization parameter when coping with the apparently trivial regularized mean problem introduced here; this is the simplest form of Tikhonov regularization, which, in turn, is the primal form of the learning algorithm Regularized Least Squares. This controlled environment gives the possibility to define oracular notions of regularization and to experiment with new methodologies for predicting the regularization parameter that can be extended to the more general regression case. The analysis stems from James-Stein theory, shows the equivalence of shrinkage and regularization, and is carried out using multiple kernel learning for regression and SVD analysis; a mean value estimator is built, first via a rational function and secondly via a balanced neural network architecture suitable for estimating statistical quantities and gaining symmetric expectations. The obtained results show that a non-linear analysis of the sample and a non-linear estimation of the mean obtained by neural networks can be profitably used to improve the accuracy of mean value estimations, especially when a small number of realizations is provided.", "keywords": "model selection;regularization;mean problem;back-propagation;multiple kernel learning;james-stein theory;svd;shrinkage", "title": "Learning the mean: A neural network approach"} {"abstract": "Clustering is the process of grouping a set of objects into classes of similar objects. Although definitions of similarity vary from one clustering model to another, in most of these models the concept of similarity is based on distances, e.g., Euclidean distance or cosine distance. In other words, similar objects are required to have close values on at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we propose, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, which captures not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers.
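To make the notion of pattern-based similarity concrete, the sketch below checks whether two objects form a coherent pattern on a subset of dimensions using a pScore-style test (the absolute difference of the two objects' value differences over a column pair); the threshold and data are illustrative assumptions.

```python
from itertools import combinations

def pscore(x, y, a, b):
    # Coherence of objects x and y on the column pair (a, b): zero when the
    # two rows differ by a constant offset, i.e. exhibit the same pattern.
    return abs((x[a] - x[b]) - (y[a] - y[b]))

def coherent(x, y, cols, delta):
    # x and y are pattern-similar on `cols` if every column pair drawn from
    # them is coherent within delta.
    return all(pscore(x, y, a, b) <= delta for a, b in combinations(cols, 2))

g1 = {"c1": 1.0, "c2": 4.0, "c3": 2.0}    # expression levels (illustrative)
g2 = {"c1": 5.1, "c2": 8.0, "c3": 6.05}   # same rise/fall pattern, shifted up
print(coherent(g1, g2, ["c1", "c2", "c3"], delta=0.2))  # True
```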
Our paper introduces an effective algorithm to detect such clusters, and we perform tests on several real and synthetic data sets to show its effectiveness.", "keywords": "collaborative-filtering;applications;analysis;definition;express;concept;object;browse;general;response;group;model;paper;gene regulatory network;exploration;coherence;pattern;distance;values;custom;process;data;discoveries;similarity;algorithm;connection;effect;microarray;class;cluster", "title": "clustering by pattern similarity in large data sets"} {"abstract": "Underplatform friction dampers are commonly used to control the vibration level of turbine blades in order to prevent high-cycle fatigue failures. Experimental validation of highly non-linear response predictions obtained from FEM bladed disk models incorporating underplatform damper models has proved to be very difficult, as has the assessment of the performance of a chosen design. In this paper, the effect of wedge-shaped underplatform dampers on the dynamics of a simple bladed disk under rotating conditions is measured, and the effect of the excitation level on the UPDs' performance is investigated for different engine-order excitations near the resonance frequencies of the first blade bending modes of the system. The measurements are performed with an improved configuration of a rotating test rig, designed with a non-contact magnetic excitation and a non-contact rotating SLDV measurement system.", "keywords": "friction damping;laser vibrometry;bladed disks;experimental mechanics;non-linear dynamics;underplatform dampers", "title": "Measuring the performance of underplatform dampers for turbine blades by rotating laser Doppler Vibrometer"} {"abstract": "Inelastic failure analysis of concrete structures has been one of the central issues in concrete mechanics. In particular, the effect of confinement has been of great importance to capture the transition from brittle to ductile fracture of concrete under triaxial loading scenarios. Moreover, it has been a challenge to numerically implement material descriptions that are susceptible to loss of stability and localization. In this article, a novel triaxial concrete model is presented, which captures the full spectrum of triaxial stress and strain histories in reinforced concrete structures. Thereby, inelastic dilatation is controlled by a non-associated flow rule to attain realistic predictions of inelastic volume change at various confinement levels. Different features of distributed and localized failure of the concrete model are examined under confined compression, uniaxial tension, pure shear, and simple shear. The performance at the structural level is illustrated with the example of a reinforced concrete column subjected to combined axial and transverse loading.", "keywords": "triaxial concrete model;elasto-plastic hardening/softening;localization properties in tension;compression and shear;r/c column subject to axial loading and shearing", "title": "Failure analysis of R/C columns using a triaxial concrete model"} {"abstract": "Recent research has proposed efficient protocols for distributed triggers, which can be used in monitoring infrastructures to maintain system-wide invariants and detect abnormal events with minimal communication overhead. To date, however, this work has been limited to simple thresholds on distributed aggregate functions like sums and counts.
In this paper, we present our initial results that show how to use these simple threshold triggers to enable sophisticated anomaly detection in near-real time, with modest communication overheads. We design a distributed protocol to detect \"unusual traffic patterns\" buried in an Origin-Destination network flow matrix that: a) uses a Principal Components Analysis decomposition technique to detect anomalies via a threshold function on residual signals [10]; and b) efficiently tracks this threshold function in near-real time using a simple distributed protocol. In addition, we speculate that such simple thresholding can be a powerful tool for a variety of monitoring tasks beyond the one presented here, and we propose an agenda to explore additional sophisticated applications.", "keywords": "distributed triggers;anomaly detection;pca", "title": "toward sophisticated detection with distributed triggers"} {"abstract": "Formalises A+I: an assembly language extended with protected module architectures, an isolation mechanism found in emerging processors. Presents two trace semantics for A+I programs and proves that both are fully abstract w.r.t. the operational semantics. Details which problems arise when considering readout and writeout labels in the trace semantics of A+I programs.", "keywords": "fully abstract semantics;trace semantics;untyped assembly language;protected modules architectures;formal languages", "title": "Fully abstract trace semantics for protected module architectures"} {"abstract": "One-dimensional nuclear magnetic resonance (1D NMR) logging technology has some significant limitations in fluid typing. However, not only can two-dimensional nuclear magnetic resonance (2D NMR) provide some accurate porosity parameters, but it can also identify fluids more accurately than 1D NMR. In this paper, based on the relaxation mechanism of (T2, D) 2D NMR in a gradient magnetic field, a hybrid inversion method that combines least-squares-based QR decomposition (LSQR) and truncated singular value decomposition (TSVD) is examined in the 2D NMR inversion of various fluid models. The forward modeling and inversion tests are performed in detail with different acquisition parameters, such as magnetic field gradients (G) and echo spacing (TE) groups. The simulated results are discussed and described in detail, the influence of the above-mentioned observation parameters on the inversion accuracy is investigated and analyzed, and the observation parameters in multi-TE activation are optimized. Furthermore, the hybrid inversion can be applied to quantitatively determine the fluid saturation. To study the effects of noise level on the hybrid method and inversion results, the numerical simulation experiments are performed using different signal-to-noise ratios (SNRs), and the effect of different SNRs on fluid typing using three fluid models is discussed and analyzed in detail.", "keywords": "two-dimensional nmr logging;transverse relaxation time;diffusion coefficient;fluid typing", "title": "A new inversion method for (T2, D) 2D NMR logging and fluid typing"} {"abstract": "A theory of communication between autonomous agents should make testable predictions about which communicative behaviors are collaborative, and provide a framework for determining the features of a communicative situation that affect whether a behavior is collaborative. The results presented here are derived from a two-phase empirical method.
First, we analyze a corpus of naturally occurring problem-solving dialogues in order to identify potentially collaborative communicative strategies. Second, we experimentally test hypotheses that arise from the corpus analysis in Design-World, an experimental environment for simulating dialogues. The results indicate that collaborative behaviors must be defined relative to the cognitive limitations of the agents and the cognitive demands of the task. The method of computational simulation provides an additional empirical basis for theories of human-computer collaboration.", "keywords": "collaboration;communication;simulation", "title": "TESTING COLLABORATIVE STRATEGIES BY COMPUTATIONAL SIMULATION - COGNITIVE AND TASK EFFECTS"} {"abstract": "Recent years have seen an increased interest in navigational services for pedestrians. To ensure that these services are successful, it is necessary to understand the information requirements of pedestrians when navigating, and in particular, what information they need and how it is used. A requirements study was undertaken to identify these information requirements within an urban navigation context. Results show that landmarks were by far the most predominant navigation cue, that distance information and street names were infrequently used, and that information is used to enable navigation decisions, but also to enhance the pedestrian's confidence and trust. The implications for the design of pedestrian navigation aids are highlighted.", "keywords": "design;navigation;pedestrian;requirements;wayfinding", "title": "Pedestrian navigation aids: information requirements and design implications"} {"abstract": "The present study evaluates the cognitive representation of a kicking movement performed by a human and a humanoid robot, and how they are represented in experts and novices of soccer and robotics, respectively. To learn about the expertise-dependent development of memory structures, we compared the representation structures of soccer experts and robot experts concerning a human and a humanoid robot kicking movement. We found different cognitive representation structures for both expertise groups under two different motor performance conditions (human vs. humanoid robot). In general, the expertise relies on the perceptual-motor knowledge of the human motor system. Thus, the soccer experts' cognitive representation of the humanoid robot movement is dominated by their representation of the corresponding human movement. Additionally, our results suggest that robot experts, in contrast to soccer experts, access functional features of the technical system of the humanoid robot in addition to their perceptual-motor knowledge about the human motor system. Thus, their perceptual-motor and neuro-functional machine representations are integrated into a cognitive representation of the humanoid robot movement.", "keywords": "neuro-functional machine representation;perceptual-motor representation;expertise;motor system;humanoid robot;human movement", "title": "Cognitive Representation of a Complex Motor Action Executed by Different Motor Systems"} {"abstract": "Researchers claim that data in electronic patient records can be used for a variety of purposes including individual patient care, management and resource planning, and scientific research.
Our objective in the project Integrated Primary Care Information (IPCI) was to assess whether the electronic patient records of Dutch general practitioners contain sufficient data to perform studies in the area of postmarketing surveillance. We determined the data requirements for postmarketing surveillance studies, implemented additional software in the electronic patient records of the general practitioner, developed an organization to monitor the use of data, and performed validation studies to test the quality of the data. Analysis of the data requirements showed that additional software had to be installed to collect data that is not recorded in routine practice. To avoid having to obtain informed consent from each enrolled patient, we developed IPCI as a semianonymous system: both patients and participating general practitioners are anonymous to the researchers. Under specific circumstances, the researcher can indirectly contact (through a trusted third party) the physician that made the data available. Only the treating general practitioner is able to decode the identity of his patients. A Board of Supervisors predominantly consisting of participating general practitioners monitors the use of data. Validation studies show the data can be used for postmarketing surveillance. With additional software to collect data not normally recorded in routine practice, data from the electronic patient records of general practitioners can be used for postmarketing surveillance.", "keywords": "postmarketing surveillance;electronic patient record;general practitioner;validation", "title": "Postmarketing surveillance based on electronic patient records: The IPCI project"} {"abstract": "Hannenhalli and Pevzner (36th Annual Symposium on Foundations of Computer Science, Milwaukee, WI, IEEE Computer Soc. Press, Los Alamitos, CA, 1995, p. 581) gave a polynomial time algorithm for computing the minimum number of reversals, translocations, fissions, and fusions, that would transform one multichromosomal genome to another when both have the same set of genes without repeats. We fixed some problems with the construction: (1) They claim it can exhibit such a sequence of steps, but there was a gap in the construction. (2) Their construction had an asymmetry in the number of chromosomes in the two genomes, whereby forward scenarios could have fissions but not fusions. We also improved the speed by combining the algorithm with the algorithm of Bader et al. (J. Comput. Biol. 8 (5) (2001) 483) that computes reversal distances for permutations in linear time.", "keywords": "genome rearrangements;fusion;fission;translocation;reversal;inversion;breakpoint graph;genes;chromosomes;homology", "title": "Efficient algorithms for multichromosomal genome rearrangements"} {"abstract": "Using electro-oculography (EOG), two types of eye-gaze interfaces have been developed: \"EOG Pointer\" and \"EOG Switch\". The former enables a user to move a computer cursor or to control a machine using only eye-gaze, in spite of signal drift and blinking artifacts. In contrast, the latter outputs an ON/OFF signal only. Although its function is the simplest, it enables every user to easily turn a nurse-call device ON/OFF or to send a one-bit signal to a personal computer with high stability and reliability.
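The circuit details of the EOG Switch are not given in this abstract; as a software analogue of such a one-bit interface, the sketch below combines a slowly adapting baseline (to tolerate drift) with hysteresis thresholds (to suppress chatter). The function name and all parameter values are hypothetical.

```python
def eog_switch(samples, baseline_n=50, on_uv=120.0, off_uv=60.0):
    # Emit one ON event per deliberate gaze shift in a raw EOG trace.
    # Drift is absorbed by a slowly adapting baseline; hysteresis (separate
    # on/off thresholds) suppresses chatter from small fluctuations.
    baseline, active, events = float(samples[0]), False, []
    alpha = 1.0 / baseline_n
    for t, v in enumerate(samples):
        dev = v - baseline
        if not active and abs(dev) > on_uv:
            active = True
            events.append(t)          # rising edge -> one switch closure
        elif active and abs(dev) < off_uv:
            active = False
        if not active:                # adapt only while idle, so a held
            baseline += alpha * dev   # gaze is not "learned away"
    return events

print(eog_switch([0.0] * 60 + [200.0] * 30 + [0.0] * 60))  # -> [60]
```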
Since the EOG Switch was commercialized in 2003, it has been widely used among amyotrophic lateral sclerosis (ALS) patients in Japan.", "keywords": "electro-oculography;dc amplifier;ac amplifier;eye-gaze interface", "title": "eye-gaze interfaces using electro-oculography (eog)"} {"abstract": "The development of software for dynamic simulation of electrical power systems requires a comprehensive range of complex studies, which encompasses many areas of electrical engineering as well as software engineering. This study aims to develop an efficient strategy for the development of software tools for dynamic power system simulation studies. The proposed strategy is based on the object-oriented creational pattern. This approach has the advantage of simplifying the application development process by mapping the block diagram model representation to corresponding specialized classes. Firstly, a conceptual mapping between block diagrams and the object-oriented paradigm, based on the Factory Method, is carried out. After that, some flexible strategies are presented in order to obtain improved efficiency for the numerical routines, based on the Builder pattern. This allows for the parameterization of the selected numerical integration techniques. The proposed strategy was evaluated using a 4-generator multi-machine power system. The simulation results showed that the proposed strategy was able to provide good power system dynamic performance.", "keywords": "design patterns;factory method;mapping;builder;object-oriented;dynamic simulation;numerical routines;power systems", "title": "Creational Object-oriented Design Pattern Applied to the Development of Software Tools for Electric Power Systems Dynamic Simulations"} {"abstract": "This paper presents a detailed study of the graph-based algorithm used to generate geometric moment invariant functions. The graph-based algorithm has been found to suffer from high computational complexity. One major cause of this problem is that the algorithm generates too many graphs that produce zero moment invariant functions. Hence, we propose an algorithm to determine and eliminate the zero-moment-invariant generating graphs and thereby generate non-zero moment invariant functions with reduced computational complexity. The correctness of the algorithm has been verified and discussed with suitable induction proofs and sample graphs. Asymptotic analysis has been presented to clearly illustrate the reduction in computational complexity achieved by the proposed algorithm. It has been found and illustrated with examples that the computational time for identifying non-zero invariants could be largely reduced with the help of our proposed algorithm.", "keywords": "computational complexity;geometric moments;image transforms;orthogonal moments;moment invariants", "title": "Algorithm for faster computation of non-zero graph based invariants"} {"abstract": "Flash images are known to suffer from several problems: saturation of nearby objects, poor illumination of distant objects, reflections of objects strongly lit by the flash, and strong highlights due to the reflection of the flash itself by glossy surfaces. We propose to use a flash and no-flash (ambient) image pair to produce better flash images. We present a novel gradient projection scheme based on a gradient coherence model that allows removal of reflections and highlights from flash images.
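A minimal sketch of the gradient projection idea follows: keep only the component of the flash image's gradient field that agrees with the ambient image's gradient direction, so that flash-only structures such as reflections and highlights are suppressed. Only the projection step is shown; reconstructing the output image from the projected field (e.g., by solving a Poisson equation) is omitted, and the function names are assumptions.

```python
import numpy as np

def project_gradients(gx_f, gy_f, gx_a, gy_a, eps=1e-6):
    # Project the flash gradient (gx_f, gy_f) onto the direction of the
    # ambient gradient (gx_a, gy_a) at every pixel. Flash-only artifacts
    # have little ambient support and are therefore attenuated.
    dot = gx_f * gx_a + gy_f * gy_a
    norm2 = gx_a ** 2 + gy_a ** 2 + eps
    scale = dot / norm2
    return scale * gx_a, scale * gy_a

# Toy 2x2 example: where the ambient gradient vanishes, the projected
# flash gradient is driven toward zero (the candidate artifact pixels).
gx_f = np.array([[1.0, 2.0], [0.5, 3.0]])
gy_f = np.array([[0.0, 1.0], [0.5, 0.0]])
gx_a = np.array([[1.0, 0.0], [0.5, 3.1]])
gy_a = np.array([[0.0, 0.0], [0.5, 0.1]])
print(project_gradients(gx_f, gy_f, gx_a, gy_a))
```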
We also present a brightness-ratio based algorithm that allows us to compensate for the falloff in the flash image brightness due to depth. In several practical scenarios, the quality of flash/no-flash images may be limited in terms of dynamic range. In such cases, we advocate using several images taken under different flash intensities and exposures. We analyze the flash intensity-exposure space and propose a method for adaptively sampling this space so as to minimize the number of captured images for any given scene. We present several experimental results that demonstrate the ability of our algorithms to produce improved flash images.", "keywords": "flash;reflection removal;gradient projection;flash-exposure sampling;high dynamic range imaging", "title": "Removing photography artifacts using gradient projection and flash-exposure sampling"} {"abstract": "We consider 3D interior wave propagation problems with vanishing initial and mixed boundary conditions, reformulated as a system of two boundary integral equations with retarded potentials. These latter are then set in a weak form, based on a natural energy identity satisfied by the solution of the differential problem, and discretized by the energetic Galerkin boundary element method. Numerical results are presented and discussed in order to show the stability and accuracy of the proposed technique.", "keywords": "wave propagation;boundary integral equation;energetic galerkin boundary element method", "title": "A stable 3D energetic Galerkin BEM approach for wave propagation interior problems"} {"abstract": "For extracting the characteristics of a specific geographic entity, and notably a place, we propose to use dynamic Extreme Tagging Systems in combination with the classic approach of static KR models like ontologies, thesauri and gazetteers. Indeed, we argue that in local search, the what that is queried is implicitly about places. However, existing knowledge representation (KR) models, such as ontologies based on logical theories, conceptual spaces, affordances, or others, cannot capture in isolation all aspects of the meaning of a place. Therefore we propose to use a combination of them based on the underlying notion of differences, linked elements of meaning without commitment to any KR model. Mappings to elements of different KR models can be made later to follow the requirements of a given task, supported by a KR representation of the elements that support this task. We show the usefulness of the approach for local search by applying it to the notion of place defined as a location that supports a homogeneous affordance field, i.e. the spatial area which allows one to do a particular thing, while allowing homogeneity of movement, meaning that the previous field is not interrupted by any boundaries.", "keywords": "local search;image schemata;ai;multi-representation;similarity;affordances;knowledge representation;wordnet;conceptual spaces;differences;extreme tagging", "title": "a differential notion of place for local search"} {"abstract": "Innovation is the creation of a new idea, practice, object, or even a product by an individual or company. A competitive organization needs to continuously offer new lines of products and services to the market for their customers. In order to cut down their research and development (R&D) costs, companies seek external or even global vendors to pursue their R&D tasks.
This paper discusses the issues related to innovation outsourcing, including uncertainty, risks, and productivity and quality concerns.", "keywords": "innovation;outsourcing;quality;productivity;risks", "title": "Innovation outsourcing: Risks and quality issues"} {"abstract": "What motivates firms to develop Internet-enabled interfirm communication? We draw upon the work of Alavi et al. (2005-2006) and propose that the use of the Internet in interfirm communication is influenced by a firm's orientation and its internal communities of practice. Based on data collected from 307 international trade firms in the Beijing area, we find that Internet-enabled interfirm communication is directly driven by internal communities of practice and customer orientation, and indirectly by competitor orientation and learning orientation. The internal community of practice is affected by learning orientation and competitor orientation, but not by customer orientation. The present study contributes to the literature by providing an empirical investigation of firms' strategic communications from the perspective of firm orientations, delineating how different firm orientations vary in their impact on firms' strategic communications, and exploring the bridging effect of communities of practice on the influence of firm orientations on knowledge management initiatives. ", "keywords": "firm orientation;learning orientation;customer orientation;competitor orientation;community of practice;internet-enabled interfirm communication", "title": "Firm orientation, community of practice, and Internet-enabled interfirm communication: Evidence from Chinese firms"} {"abstract": "In this work a concept of the index of a point of a 3-D (26, 6) digital image is defined. Based on this concept, a new characterization of the so-called simple points [1], as well as an algorithm for computing the Euler characteristic of 3-D (26, 6) digital pictures, is proposed. ", "keywords": "digital picture;index of a point;euler number;invariant transformation", "title": "Index of a point of 3-D digital binary image and algorithm for computing its Euler characteristic"} {"abstract": "URL shortening services (USSes), which provide short aliases to registered long URLs, have become popular owing to Twitter. Despite their popularity, researchers do not carefully consider their security problems. In this paper, we explore botnet models based on USSes to prepare for new security threats before they evolve. Specifically, we consider using USSes for alias flux to hide botnet command and control (C&C) channels. In alias flux, a botmaster obfuscates the IP addresses of his C&C servers, encodes them as URLs, and then registers them to USSes with custom aliases generated by an alias generation algorithm. Later, each bot obtains the encoded IP addresses by contacting USSes using the same algorithm. For USSes that do not support custom aliases, the botmaster can use shared alias lists instead of the shared algorithm. DNS-based botnet detection schemes cannot detect an alias flux botnet, and network-level detection and blacklisting of the fluxed aliases are difficult. We also discuss possible countermeasures to cope with these new threats and investigate operating USSes.
", "keywords": "botnet;dns;domain flux;url shortening service", "title": "Fluxing botnet command and control channels with URL shortening services"} {"abstract": "Opinion retrieval is a task of growing interest in social life and academic research, which is to find relevant and opinionate documents according to a user's query. One of the key issues is how to combine a document's opinionate score (the ranking score of to what extent it is subjective or objective) and topic relevance score. Current solutions to document ranking in opinion retrieval are generally ad-hoc linear combination, which is short of theoretical foundation and careful analysis. In this paper, we focus on lexicon-based opinion retrieval. A novel generation model that unifies topic-relevance and opinion generation by a quadratic combination is proposed in this paper. With this model, the relevance-based ranking serves as the weighting factor of the lexicon-based sentiment ranking function, which is essentially different from the popular heuristic linear combination approaches. The effect of different sentiment dictionaries is also discussed. Experimental results on TREC blog datasets show the significant effectiveness of the proposed unified model. Improvements of 28.1% and 40.3% have been obtained in terms of MAP and p@10 respectively. The conclusion is not limited to blog environment. Besides the unified generation model, another contribution is that our work demonstrates that in the opinion retrieval task, a Bayesian approach to combining multiple ranking functions is superior to using a linear combination. It is also applicable to other result re-ranking applications in similar scenario.", "keywords": "generation model;opinion generation model;opinion retrieval;topic relevance;sentiment analysis", "title": "a generation model to unify topic relevance and lexicon-based sentiment for opinion retrieval"} {"abstract": "The pulse transfer characteristic of a normal selectively doped AlxGa1?xAs/GaAs heterostructure containing deep traps in the AlxGa1?xAs layer is considered. It is shown that these deep traps are responsible for an undershoot in the drain-source current at the end of a positive voltage pulse applied to the gate (the pulse voltage is measured from the initial gate bias) and the trap depth can be determined from this undershoot.", "keywords": "selectively doped heterostucture;high electron mobility transistor;transfer characteristic;deep trap", "title": "Anomalous behavior of the pulse transfer characteristic of a selectively doped AlxGa1?xAs/GaAs heterostructure containing deep traps"} {"abstract": "Wireless Image Sensor Networks (WISNs) consisting of untethered camera nodes and sensors may be deployed in a variety of unattended and possibly hostile environments to obtain surveillance data. In such settings, the WISN nodes must perform reliable event acquisition to limit the energy, computation and delay drains associated with forwarding large volumes of image data wirelessly to a sink node. In this work we investigate the event acquisition properties of WISNs that employ various techniques at the camera nodes to distinguish between event and non-event frames in uncertain environments that may include attacks. These techniques include lightweight image processing, decisions from n sensors with/without cluster head fault and attack detection, and a combination approach relying on both lightweight image processing and sensor decisions. 
We analyze the relative merits and limitations of each approach in terms of the resulting probability of event detection and false alarm in the face of occasional errors, attacks and stealthy attacks.", "keywords": "image sensor networks;lightweight event acquisition;sensor network security", "title": "Wireless image sensor networks: event acquisition in attack-prone and uncertain environments"} {"abstract": "Event sequences capture system and user activity over time. Prior research on sequence mining has mostly focused on discovering local patterns appearing in a sequence. While interesting, these patterns do not give a comprehensive summary of the entire event sequence. Moreover, the number of patterns discovered can be large. In this article, we take an alternative approach and build short summaries that describe an entire sequence, and discover local dependencies between event types. We formally define the summarization problem as an optimization problem that balances shortness of the summary with accuracy of the data description. We show that this problem can be solved optimally in polynomial time by using a combination of two dynamic-programming algorithms. We also explore more efficient greedy alternatives and demonstrate that they work well on large datasets. Experiments on both synthetic and real datasets illustrate that our algorithms are efficient and produce high-quality results, and reveal interesting local structures in the data.", "keywords": "algorithms;experimentation;theory;event sequences;summarization;log mining", "title": "Constructing Comprehensive Summaries of Large Event Sequences"} {"abstract": "In this paper we show that the energy reductions obtained from using two techniques, data remapping (DR) and voltage/frequency scaling of the off-chip bus and memory, combine to provide interesting trade-offs between energy, execution time and power. Both methods aim to reduce the energy consumed by the memory subsystem. DR is a fully automatic compile-time technique applicable to pointer-intensive dynamic applications. Voltage/frequency scaling of off-chip memory is a technique applied at the hardware level. When combined together, energy reductions can be as high as 49.45%. The improvements are verified in the context of three OLDEN pointer-centric benchmarks, namely Perimeter, Health and TSP.", "keywords": "low power;embedded systems;energy model;voltage/frequency scaling;compiler optimizations", "title": "Combining data remapping and voltage/frequency scaling of second level memory for energy reduction in embedded systems"} {"abstract": "In this paper, we identify trends about, benefits from, and barriers to performing user evaluations in software engineering research. From a corpus of over 3,000 papers spanning ten years, we report on various subtypes of user evaluations (e.g., coding tasks vs. questionnaires) and relate user evaluations to paper topics (e.g., debugging vs. technology transfer). We identify the external measures of impact, such as best paper awards and citation counts, that are correlated with the presence of user evaluations.
We complement this with a survey of over 100 researchers from over 40 different universities and labs in which we identify a set of perceived barriers to performing user evaluations.", "keywords": "experimentation;human factors;human study;user evaluation", "title": "Benefits and Barriers of User Evaluation in Software Engineering Research"} {"abstract": "In the framework of the Hueckel molecular orbital (HMO) model, an analytical method has been elaborated which enables calculation of energy levels and wave functions for polymethine dye molecules with arbitrary end groups characterized by two effective additive parameters. The method represents a generalization of the known long-chain approximation (LCA), which manipulates only frontier pi-MOs, and yields analytical relations for molecular characteristics based on all occupied dye pi-MOs.", "keywords": "polymethine compounds;long-chain approximation;quasi-one-dimensional approximation;green's functions;atomic charges;bond orders", "title": "Quasi-one-dimensional approximation in the HMO model of polymethine dyes"} {"abstract": "Resource-oriented services have recently become an enabling technology for integrating and configuring information from different heterogeneous systems so as to meet ever-changing environments, which not only need concepts for entities but also require semantics for operations. With the aim of combining structural and operational semantics agilely, a Semantic Resource Service Model (SRSM) is proposed. Firstly, SRSM describes Entity-Oriented and Transition-Oriented Resources by a semantic meta-model which contains data structures and operation semantics. Secondly, by describing structural semantics for Entity-Oriented Resources, heterogeneous inputs/outputs of a service can be automatically matched. Thirdly, by describing operational semantics for Transition-Oriented Resources, the service composition sequence can be inferred after ontology reasoning. Then, both Entity-Oriented and Transition-Oriented Resources are encapsulated into a composite RESTful service. Finally, a case study and several comparisons are carried out in a prototype system. The result shows that the proposed approach provides a flexible way for resource-oriented service composition.", "keywords": "structural semantic;operational semantic;ontology;restful service;resource-oriented architecture;entity-oriented resource;transition-oriented resource", "title": "Ontology Combined Structural and Operational Semantics for Resource-Oriented Service Composition"} {"abstract": "The quality of channel sidewalls resulting from through-wafer deep reactive-ion etching is analysed using scanning electron microscopy, atomic-force microscopy and interferometry. Sidewall quality and profile are highly dependent on the width of the etched channel. Channels narrower than 100 μm show generally good sidewall smoothness, though with a bowed profile. This profile leads to ion-induced damage towards the bottom of the channel sidewall. Wider channels, in contrast, exhibit overpassivation of the sidewalls with a region of thick polymer build-up followed by vertical striations and a very rough surface, but with an overall vertical profile.
Redeposition of the passivation from the trench bottom onto the sidewalls, as suggested by other researchers, is supported by our observations.", "keywords": "deep reactive-ion etching;mems;fluorocarbon redeposition;sidewall morphology", "title": "Analysis of sidewall quality in through-wafer deep reactive-ion etching"} {"abstract": "A combination of modelling and analysis techniques was used to design a six component force balance. The balance was designed specifically for the measurement of impulsive aerodynamic forces and moments characteristic of hypervelocity shock tunnel testing using the stress wave force measurement technique. Aerodynamic modelling was used to estimate the magnitude and distribution of forces and finite element modelling to determine the mechanical response of proposed balance designs. Simulation of balance performance was based on aerodynamic loads and mechanical responses using convolution techniques. Deconvolution was then used to assess balance performance and to guide further design modifications leading to the final balance design.", "keywords": "force balance design;force measurement;finite element modelling;deconvolution;shock tunnel;hypersonic", "title": "Design, modelling and analysis of a six component force balance for hypervelocity wind tunnel testing"} {"abstract": "This article focuses on business process engineering, especially on alignment between business analysis and implementation. Through a business process management approach, different transformations operate on process models in order to make them executable. To keep process models consistent from the business model to the IT model, we propose a pivotal metamodel-centric methodology. It aims at preserving or supplying all requisite structural and semantic data needed to perform such transformations without loss of information. Through this we can ensure the alignment between business and IT. This article describes the concept of a pivotal metamodel and proposes a methodology using such an approach. In addition, we present an example and the resulting benefits.", "keywords": "business process engineering;metamodelling;transformation;alignment", "title": "Towards a pivotal-based approach for business process alignment"} {"abstract": "Relational classification aims at including relations among entities into the classification process, for example taking relations among documents such as common authors or citations into account. However, considering more than one relation can further improve classification accuracy. Here we introduce a new approach that makes use of several relations as well as both relations and local attributes for classification, using ensemble methods. To accomplish this, we present a generic relational ensemble model that can use different relational and local classifiers as components. Furthermore, we discuss solutions for several problems concerning relational data such as heterogeneity, sparsity, and multiple relations. The sparsity problem in particular will be discussed in more detail. We introduce a new method called PRNMultiHop that tries to handle this problem. Furthermore, we categorize relational methods in a systematic way.
Finally, we provide empirical evidence that our relational ensemble methods outperform existing relational classification methods, even rather complex models such as relational probability trees (RPTs), relational dependency networks (RDNs) and relational Bayesian classifiers (RBCs).", "keywords": "relational data mining;ensemble classification;sparse graphs;relational autocorrelation", "title": "Ensembles of relational classifiers"} {"abstract": "New acquisition methods have increased the availability of surface property data that capture location-dependent data on feature surfaces. However, these data are not supported as fully in the geovisualization of the Digital City as established data categories such as feature attributes, 2D rasters, or geometry. Consequently, 3D surface properties are largely excluded from the information extraction and knowledge creation process of geovisualization despite their potential for being an effective tool in many such tasks. To overcome this situation, this paper examines the benefits of a better integration into geovisualization systems in terms of two examples and discusses technological foundations for surface property support. The main contribution is the identification of computer graphics techniques as a suitable basis for such support. This way, the processing of surface property data fits well into existing visualization systems. This finding is demonstrated through an interactive prototypic visualization system that extends an existing system with surface property support. While this prototype concentrates on technology and neglects user-related and task-related aspects, the paper includes a discussion on challenges for making surface properties accessible to a wider audience.", "keywords": "geovisualization;exploratory data analysis;3d surface properties;textures;computer graphics;gpu", "title": "3D feature surface properties and their application in geovisualization"} {"abstract": "We have investigated an optimum form of the modified icosahedral grid that is generated by applying spring dynamics to the standard icosahedral grid system. The spring dynamics can generate a more homogeneous grid system than the standard icosahedral grid system by tuning the natural spring length: as the natural spring length becomes longer, the ratio of the maximum grid interval to the minimum one becomes closer to unity. When the natural spring length is larger than a critical value, however, the spring dynamic system does not have a stable equilibrium. By setting the natural spring length to be the marginally critical value, we can obtain the most homogeneous grid system, which is most efficient in terms of the CFL condition. We have analyzed eigenmodes involved in the initial error of the geostrophic balance problem [test case 2 of D. L. Williamson et al. (1992, J. Comput. Phys. 102, 211)]. Since the balance state in the discrete system differs slightly from the exact solution of the analytic system, the initial error field includes both the gravity wave mode and the Rossby wave mode. As the results of the analysis are based on Hough harmonics decompositions, we detected Rossby and gravity wave modes with zonal wavenumber 5, which are asymmetric about the equator. These errors are associated with the icosahedral grid structure. The symmetric gravity wave mode with zonal wavenumber 0 also appears in the error field. To clarify the evolution of Rossby waves, we introduce divergence damping to reduce the gravity wave mode.
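To illustrate the spring-dynamics grid modification described in this abstract, the toy sketch below relaxes a planar mesh of nodes connected by springs with a tunable natural length; the real scheme operates on the sphere with pinned icosahedral vertices, so this is only a schematic of the mechanism, with all parameters assumed.

```python
import numpy as np

def relax(points, edges, natural_len, steps=500, dt=0.02, damping=0.7):
    # Each edge pulls/pushes its endpoints toward the natural spring length
    # (Hooke's law, k = 1); iterating drives the mesh toward more
    # homogeneous spacing, as described for the modified icosahedral grid.
    pts = np.array(points, dtype=float)
    vel = np.zeros_like(pts)
    for _ in range(steps):
        force = np.zeros_like(pts)
        for i, j in edges:
            d = pts[j] - pts[i]
            dist = np.linalg.norm(d)
            f = (dist - natural_len) * d / max(dist, 1e-12)
            force[i] += f
            force[j] -= f
        vel = damping * (vel + dt * force)
        pts += dt * vel
    return pts

# A short chain with uneven spacing relaxes toward uniform intervals.
print(relax([[0.0, 0.0], [0.3, 0.0], [2.0, 0.0]], [(0, 1), (1, 2)], 1.0))
```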
From the simulated results of the geostrophic problem with various grid systems, we found that the spuriously generated Rossby wave mode is eliminated most effectively when the most homogeneously distributed grid system is used. It is therefore concluded that the most homogeneous grid system is the best choice from the viewpoint of numerical accuracy as well as computational efficiency. ", "keywords": "shallow water model;icosahedral grid;spring dynamics;climate model", "title": "An optimization of the icosahedral grid modified by spring dynamics"} {"abstract": "We describe the design and implementation of the Glue-Nail database system. The Nail language is a purely declarative query language; Glue is a procedural language used for non-query activities. The two languages combined are sufficient to write a complete application. Nail and Glue code both compile into the target language IGlue. The Nail compiler uses variants of the magic sets algorithm, and supports well-founded models. Static optimization is performed by the Glue compiler using techniques that include peephole methods and data flow analysis. The IGlue code is executed by the IGlue interpreter, which features a run-time adaptive optimizer. The three optimizers each deal with separate optimization domains, and experiments indicate that an effective synergism is achieved. The Glue-Nail system is largely complete and has been tested using a suite of representative applications.", "keywords": "activation;applications;design;domain;experience;timing;model;writing;data flow analysis;method;adapt;systems;optimality;procedure;interpretation;code;language;implementation;compilation;algorithm;feature;effect;database;completeness;query", "title": "design and implementation of the glue-nail database system"} {"abstract": "A network for the detection of an approaching object with simple-shape recognition is proposed based on lower animal vision. The locust can detect an approaching object through a simple process in the descending contralateral movement detector (DCMD) in the locust brain, by which the approach velocity and direction of the object are determined. The frog can recognize simple shapes through a simple process in the tectum and thalamus in the frog brain. The proposed network is constructed of simple analog complementary metal oxide semiconductor (CMOS) circuits. The integrated circuit of the proposed network is fabricated with the 1.2 μm CMOS process. Measured results for the proposed circuit indicate that the approach velocity and direction of an object can be detected by the output current of the analog circuit based on the DCMD response. The shape of moving objects having simple shapes, such as circles, squares, triangles and rectangles, was recognized using the proposed frog-visual-system-based circuit.", "keywords": "analog integrated circuit;edge detection;motion sensor;shape recognition;vision chip", "title": "Analog integrated circuit for detection of an approaching object with simple-shape recognition based on lower animal vision"} {"abstract": "Objectives To compare the manifestations, mechanisms, and rates of system-related errors associated with two electronic prescribing systems (e-PS). To determine if the rate of system-related prescribing errors is greater than the rate of errors prevented. Methods Audit of 629 inpatient admissions at two hospitals in Sydney, Australia using the CSC MedChart and Cerner Millennium e-PS.
System-related errors were classified by manifestation (eg, wrong dose), mechanism, and severity. A mechanism typology comprised errors made: selecting items from drop-down menus; constructing orders; editing orders; or failing to complete new e-PS tasks. Proportions and rates of errors by manifestation, mechanism, and e-PS were calculated. Results 42.4% (n=493) of 1164 prescribing errors were system-related (78/100 admissions). This result did not differ by e-PS (MedChart 42.6% (95% CI 39.1 to 46.1); Cerner 41.9% (37.1 to 46.8)). For 13.4% (n=66) of system-related errors there was evidence that the error was detected prior to study audit. 27.4% (n=135) of system-related errors manifested as timing errors and 22.5% (n=111) wrong drug strength errors. Selection errors accounted for 43.4% (34.2/100 admissions), editing errors 21.1% (16.5/100 admissions), and failure to complete new e-PS tasks 32.0% (32.0/100 admissions). MedChart generated more selection errors (OR=4.17; p=0.00002) but fewer new task failures (OR=0.37; p=0.003) relative to the Cerner e-PS. The two systems prevented significantly more errors than they generated (220/100 admissions (95% CI 180 to 261) vs 78 (95% CI 66 to 91)). Conclusions System-related errors are frequent, yet few are detected. e-PS require new tasks of prescribers, creating additional cognitive load and error opportunities. Dual classification, by manifestation and mechanism, allowed identification of design features which increase risk and potential solutions. e-PS designs with fewer drop-down menu selections may reduce error risk.", "keywords": "cpoe;prescribing errors;unintended consequences;information technology;clinical information systems", "title": "The safety of electronic prescribing: manifestations, mechanisms, and rates of system-related errors associated with two commercial systems in hospitals"} {"abstract": "We develop in this paper a theoretical framework for the topological study of time series data. Broadly speaking, we describe geometrical and topological properties of sliding window embeddings, as seen through the lens of persistent homology. In particular, we show that maximum persistence at the point-cloud level can be used to quantify periodicity at the signal level, prove structural and convergence theorems for the resulting persistence diagrams, and derive estimates for their dependency on window size and embedding dimension. We apply this methodology to quantifying periodicity in synthetic data sets and compare the results with those obtained using state-of-the-art methods in gene expression analysis. We call this new method SW1PerS, which stands for Sliding Windows and 1-Dimensional Persistence Scoring.", "keywords": "persistent homology;time-delay embeddings;periodicity", "title": "Sliding Windows and Persistence: An Application of Topological Methods to Signal Analysis"} {"abstract": "Distributed power control is an important issue in wireless networks. Recently, noncooperative game theory has been applied to investigate interesting solutions to this problem. The majority of these studies assumes that the transmitter power level can take values in a continuous domain. However, recent trends such as the GSM standard and Qualcomm's proposal to the IS-95 standard use a finite number of discretized power levels. This motivates the need to investigate solutions for distributed discrete power control which is the primary objective of this paper.
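The probabilistic power adaptation discussed in this abstract can be pictured with a learning-automaton update of the linear reward-inaction type, sketched below: each user keeps a probability vector over its discrete power levels and reinforces the chosen level in proportion to a normalized utility. The utility values and step size here are stand-ins, not the paper's game-theoretic payoff.

```python
import random

def lri_update(probs, chosen, utility, b=0.1):
    # Linear reward-inaction step: reinforce the chosen power level in
    # proportion to the normalized utility (0 <= utility <= 1); the
    # probabilities remain a valid distribution after the update.
    return [p + b * utility * (1 - p) if i == chosen else p - b * utility * p
            for i, p in enumerate(probs)]

probs = [0.25, 0.25, 0.25, 0.25]            # four discrete power levels
for _ in range(200):
    level = random.choices(range(4), probs)[0]
    utility = [0.2, 0.9, 0.5, 0.1][level]   # stand-in for the user's payoff
    probs = lri_update(probs, level, utility)
print(probs)                                # mass concentrates on level 1
```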
We first note that, by simply discretizing, the previously proposed continuous power adaptation techniques will not suffice. This is because a simple discretization does not guarantee convergence and uniqueness. We propose two probabilistic power adaptation algorithms and analyze their theoretical properties along with their numerical behavior. The distributed discrete power control problem is formulated as an N-person, nonzero sum game. In this game, each user evaluates a power strategy by computing a utility value. This evaluation is performed using a stochastic iterative procedure. We approximate the discrete power control iterations by an equivalent ordinary differential equation to prove that the proposed stochastic learning power control algorithm converges to a stable Nash equilibrium. Conditions under which more than one stable Nash equilibrium, or even only a mixed equilibrium, may exist are also studied. Experimental results are presented for several cases and compared with the continuous power level adaptation solutions.", "keywords": "game theory;power control;stochastic learning;wireless networking", "title": "Stochastic learning solution for distributed discrete power control game in wireless data networks"} {"abstract": "Interconnect defects such as weak resistive opens, shorts, and bridges increase the path delay affected by a pattern during manufacturing test but are not significant enough to cause a failure at functional frequency. In this paper, a new faster-than-at-speed method is presented for delay test pattern application to screen small delay defects. Given a test pattern set, the technique groups the patterns into multiple subsets with close path delay distribution and determines an optimal test frequency considering both positive slack and performance degradation due to IR-drop effects. Since the technique does not increase the test frequency to an extent that any paths exercised at the rated functional frequency may fail, it avoids any scan flip-flop masking. As most semiconductor companies currently deploy compression technologies to reduce test costs, scan-cell masking is highly undesirable for pattern modification as it would imply a pattern count increase and might result in pattern regeneration. Therefore, our solution is more practical as the test engineer can run the same pattern set without any changes to the test flow other than the at-speed test frequency.", "keywords": "delay test;supply noise;test generation", "title": "A Novel Faster-Than-at-Speed Transition-Delay Test Method Considering IR-Drop Effects"} {"abstract": "The paper proposes a Fuzzy Multiple Criteria Decision Making (FMCDM) approach for banking performance evaluation. Drawing on the four perspectives of a Balanced Scorecard (BSC), this research first summarized the evaluation indexes synthesized from the literature relating to banking performance. Then, for screening these indexes, 23 indexes fit for banking performance evaluation were selected through expert questionnaires. Furthermore, the relative weights of the chosen evaluation indexes were calculated by the Fuzzy Analytic Hierarchy Process (FAHP). The three MCDM analytical tools SAW, TOPSIS, and VIKOR were then adopted to rank banking performance and improve the gaps, with three banks as an empirical example. The analysis results highlight the critical aspects of the evaluation criteria as well as the gaps to improve banking performance for achieving the aspired/desired level.
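Of the three ranking tools named in this abstract, TOPSIS is easy to sketch: alternatives are scored by their closeness to an ideal solution and distance from an anti-ideal one. The weights and bank figures below are illustrative assumptions, not data from the study.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] is True if larger is better.
    m = np.asarray(matrix, dtype=float)
    m = m / np.linalg.norm(m, axis=0)        # vector-normalize each criterion
    m = m * weights                          # apply criterion weights
    ideal = np.where(benefit, m.max(axis=0), m.min(axis=0))
    anti = np.where(benefit, m.min(axis=0), m.max(axis=0))
    d_pos = np.linalg.norm(m - ideal, axis=1)
    d_neg = np.linalg.norm(m - anti, axis=1)
    return d_neg / (d_pos + d_neg)           # closeness: higher is better

scores = topsis([[0.70, 3.1, 12.0],          # bank A (figures illustrative)
                 [0.82, 2.4, 15.0],          # bank B
                 [0.65, 4.0, 10.0]],         # bank C
                weights=[0.5, 0.2, 0.3],
                benefit=[True, False, True])
print(scores, scores.argmax())               # ranking of the three banks
```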
It shows that the proposed FMCDM evaluation model of banking performance using the BSC framework can be a useful and effective assessment tool.", "keywords": "fmcdm;balance scorecard ;fuzzy analytic hierarchy process ;topsis;vikor", "title": "A fuzzy MCDM approach for evaluating banking performance based on Balanced Scorecard"} {"abstract": "Since the 1970s the field of Geographical Information Systems (GIS) has evolved into a mature research and application area involving a number of academic fields including Geography, Civil Engineering, Computer Science, Land Use Planning, and Environmental Science. GIS can support a wide range of spatial queries that can be used to support location studies. GIS will play a significant role in future location model development and application. We review existing work that forms the interface between GIS and Location Science and discuss some of the potential research areas involving both GIS and Location Science. During the past 30 years there have been many developments in spatial data analysis, spatial data storage and retrieval, and mapping. Many of these developments have occurred in the field of Geographical Information Science. Geographical Information Systems software now supports many elementary and advanced spatial analytic approaches including the production of high quality maps. GIS will have a major impact on the field of Location Science in terms of model application and model development. The purpose of this paper is to explore the interface between the field of Location Science and GIS.", "keywords": "geographical information systems ;geographical information science;facility location;site selection", "title": "Geographical information systems and location science"} {"abstract": "In silico techniques involving the development of quantitative regression models have been extensively used for prediction of activity, property and toxicity of new chemicals. The acceptability and subsequent applicability of the models for predictions is determined based on several internal and external validation statistics. Among different validation metrics, Q(2) and R-pred(2) represent the classical metrics for internal validation and external validation respectively. Additionally, the r(m)(2) metrics introduced by Roy and coworkers have been widely used by several groups of authors to ensure the close agreement of the predicted response data with the observed ones. However, none of the currently available and commonly used validation metrics provides any information regarding the rank-order predictions for the test set. Thus, to incorporate the concept of rank-order predictions while calculating the common validation metrics originally using the Pearson's correlation coefficient-based algorithm, the new r(m(rank))(2) metric has been introduced in this work as a new variant of the r(m)(2) series of metrics. The ability of this new metric to perform the rank-order prediction is determined based on its application in judging the quality of predictions of regression-based quantitative structure-activity/property relationship (QSAR/QSPR) models for four different data sets. The different validation metrics calculated in each case were compared for their ability to reflect the rank-order predictions based on their correlation with the conventional Spearman's rank correlation coefficient.
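To make the ranking step in the FMCDM banking abstract above concrete, here is a minimal TOPSIS sketch in Python; the bank scores and weights are invented for illustration (in the paper the weights would come from fuzzy AHP).

```python
import numpy as np

def topsis(X, w, benefit):
    """Rank alternatives (rows of X) against criteria (columns).

    w: criterion weights (e.g., derived from fuzzy AHP), summing to 1.
    benefit: True for benefit criteria, False for cost criteria.
    """
    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    anti = np.where(benefit, V.min(0), V.max(0))
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)        # closeness: larger is better

# Three banks scored on four illustrative indexes (all numbers made up).
X = np.array([[0.82, 0.70, 0.55, 0.30],
              [0.75, 0.80, 0.60, 0.25],
              [0.90, 0.65, 0.50, 0.40]])
w = np.array([0.4, 0.3, 0.2, 0.1])
print(topsis(X, w, benefit=np.array([True, True, True, False])))
```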
Based on the results of the sum of ranking differences analysis performed using the Spearman's rank correlation coefficient as the reference, it was observed that the r(m(rank))(2) metric exhibited the least difference in ranking from that of the reference metric. Thus, the close correlation of the r(m(rank))(2) metric with the Spearman's rank correlation coefficient implied that the new metric could aptly perform the rank-order prediction for the test data set and can be utilized as an additional validation tool, besides the conventional metrics, for assessing the acceptability and predictive ability of a QSAR/QSPR model. ", "keywords": "qsar;qspr;qstr;validation;pearson's correlation coefficient;spearman's rank correlation coefficient", "title": "Introduction of r(m(rank))(2) metric incorporating rank-order predictions as an additional tool for validation of QSAR/QSPR models"} {"abstract": "Drawing upon and distinguishing themselves from domestic, public, work, and natural settings, homeless communities offer new cultural frontiers into which ubiquitous computing could diffuse. We report on one such frontier, a community of homeless young people, located in Seattle, WA, seeking both to foresee the consequences of pervasive access to digital media and communications and to prepare for its seemingly inevitable uptake. The community consists of hundreds of young people living without stable housing, often in the public, and an alliance of nine service agencies that seek to stabilize youth and equip them to escape homelessness. We examine the opportunities for ubiquitous computing in this community by, in part, developing a precautionary stance on intervention. This stance is then used to critically examine a scenario in which information about the service agencies is made public. From this scenario, and a description of the social and material constraints of this community, we argue that precaution offers productive counsel on decisions on whether and how to intervene with ubiquitous computing. A precautionary point of view is especially important as ubiquitous computing diffuses into communities that, by their social and material conditions, are vulnerable. In such communities, the active avoidance of harms and plans for their mitigation are particularly important.", "keywords": "homelessness;poverty;youth;community informatics;non-profit service agencies;precautionary principle;designer value;value sensitive design;value scenario;envisioning", "title": "Designing ubiquitous information systems for a community of homeless young people: precaution and a way forward"} {"abstract": "The paper analyses the linear programming problem with fuzzy coefficients in the objective function. The set of nondominated (ND) solutions with respect to an assumed fuzzy preference relation, according to Orlovsky's concept, is supposed to be the solution of the problem. Special attention is paid to unfuzzy nondominated (UND) solutions (the solutions which are nondominated to the degree one). The main results of the paper are sufficient conditions on a fuzzy preference relation that allow the problem of determining UND solutions to be reduced to that of determining the optimal solutions of a classical linear programming problem.
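A sketch of how the rank-order metric from the QSAR abstract above could be computed. The r(m)(2) formula used here, r(m)(2) = r(2)*(1 - sqrt(|r(2) - r0(2)|)) with r0(2) the squared correlation through the origin, is one common formulation from the literature and is an assumption rather than a transcription of the paper.

```python
import numpy as np
from scipy.stats import rankdata

def rm2(y, yhat):
    """One common formulation of Roy's r_m^2 metric (assumed here):
    r_m^2 = r^2 * (1 - sqrt(|r^2 - r0^2|))."""
    r2 = np.corrcoef(y, yhat)[0, 1] ** 2
    k = np.sum(y * yhat) / np.sum(yhat ** 2)          # slope through origin
    r02 = 1 - np.sum((y - k * yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    return r2 * (1 - np.sqrt(abs(r2 - r02)))

def rm2_rank(y, yhat):
    # Rank-order variant: apply the same formula to rank-transformed data,
    # so the metric reflects agreement in ordering rather than in values.
    return rm2(rankdata(y), rankdata(yhat))

y = np.array([5.1, 6.3, 4.8, 7.2, 5.9])      # observed responses (toy data)
yhat = np.array([5.0, 6.0, 5.2, 6.9, 6.1])   # model predictions (toy data)
print(rm2(y, yhat), rm2_rank(y, yhat))
```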
These solutions can thus be determined by means of classical linear programming methods.", "keywords": "fuzzy programming;linear programming;fuzzy relation;nondominated solution", "title": "On the equivalence of two optimization methods for fuzzy linear programming problems"} {"abstract": "During program maintenance, a programmer may make changes that enhance program functionality or fix bugs in code. Then, the programmer usually will run unit/regression tests to prevent invalidation of previously tested functionality. If a test fails unexpectedly, the programmer needs to explore the edit to find the failure-inducing changes for that test. Crisp uses results from Chianti, a tool that performs semantic change impact analysis [1], to allow the programmer to examine those parts of the edit that affect the failing test. Crisp then builds a compilable intermediate version of the program by adding a programmer-selected partial edit to the original code, augmenting the selection as necessary to ensure compilation. The programmer can reexecute the test on the intermediate version in order to locate the exact reasons for the failure by concentrating on the specific changes that were applied. In nine initial case studies on pairs of versions from two real Java programs, Daikon [2] and the Eclipse jdt compiler [3], we were able to use Crisp to identify the failure-inducing changes for all but 1 of 68 failing tests. On average, 33 changes were found to affect each failing test (of the 67), but only 1-4 of these changes were found to be actually failure-inducing.", "keywords": "fault localization;semantic change impact analysis;edit change dependence;regression testing;intermediate versions of programs", "title": "Identifying failure causes in Java programs: An application of change impact analysis"} {"abstract": "In this paper, an asymmetric Generalized Stewart-Gough Platform (GSP) type parallel manipulator is designed by considering the type synthesis approach. The asymmetric six-Degree Of Freedom (DOF) manipulator optimized in this paper is selected among the GSPs classified under the name of 6D. The dexterous workspace optimization of the Asymmetric parallel Manipulator with tEn Different Linear Actuator Lengths (AMEDLAL) subject to kinematic and geometric constraints is performed by using Particle Swarm Optimization (PSO). The condition number and Minimum Singular Value (MSV) of the homogenized Jacobian matrix are employed to obtain the dexterous workspace of AMEDLAL. Finally, the six-DOF AMEDLAL is also compared with the optimized Traditional Stewart-Gough Platform Manipulator (TSPM) considering the volume of the dexterous workspace in order to demonstrate its kinematic performance. Comparisons show that the manipulator proposed in this study exhibits better kinematic performance than TSPM. ", "keywords": "type synthesis;singular values;dexterous workspace;pso and gsp", "title": "Dexterous workspace optimization of an asymmetric six-degree of freedom Stewart-Gough platform type manipulator"} {"abstract": "A finite iteration method for solving systems of (max, min)-linear equations is presented. The systems have variables on both sides of the equations. The algorithm has polynomial complexity and may be extended to wider classes of equations with a similar structure.", "keywords": "-linear equations;two-sided system", "title": "SOLVING SYSTEMS OF TWO-SIDED (MAX, MIN)-LINEAR EQUATIONS"} {"abstract": "This paper presents an expression of semantic proximity.
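In the spirit of the Stewart-Gough workspace abstract above, here is a generic particle swarm optimization sketch; the dexterity objective is a placeholder standing in for a minimum-singular-value measure of a homogenized Jacobian, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def dexterity(x):
    # Placeholder objective standing in for the dexterous-workspace
    # measure (e.g., minimum singular value of a homogenized Jacobian
    # evaluated for candidate geometric parameters x).
    J = np.array([[1.0 + x[0], x[1]], [x[1], 2.0 - x[0]]])
    return np.linalg.svd(J, compute_uv=False).min()

n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5     # swarm size and PSO constants
lo, hi = -0.9, 0.9                            # geometric bounds (assumed)
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros((n, dim))
pbest, pval = x.copy(), np.array([dexterity(p) for p in x])
g = pbest[pval.argmax()].copy()
for _ in range(100):
    v = w*v + c1*rng.random((n, dim))*(pbest - x) + c2*rng.random((n, dim))*(g - x)
    x = np.clip(x + v, lo, hi)                # keep particles inside bounds
    f = np.array([dexterity(p) for p in x])
    better = f > pval
    pbest[better], pval[better] = x[better], f[better]
    g = pbest[pval.argmax()].copy()
print(g, pval.max())                          # best parameters and objective
```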
Based on the temporal data model, a method of temporal approximation is given. Using these concepts, this paper provides an evaluation method for the fuzzy and dynamic association degree with delayed time and a superposition method for association degrees. Particularly, by means of the fuzzy and dynamic association degree, the connection between the weather data of two regions can be discovered.", "keywords": "temporal data model;fuzzy association degree;delayed time;weather forecast", "title": "Fuzzy association degree with delayed time in temporal data model"} {"abstract": "In this paper, a new attempt has been made in the area of tool-based micromachining for automated, non-contact, and flexible prediction of quality responses such as average surface roughness (R(a)), tool wear ratio (TWR) and metal removal rate (MRR) of micro-turned miniaturized parts through a machine vision system (MVS) which is integrated with an adaptive neuro-fuzzy inference system (ANFIS). The images of the machined surface grabbed by the MVS could be processed using the algorithm developed in this work to extract the features of image texture [average gray level (G(a))]. This work presents an area-based surface characterization technique which applies the basic light scattering principles used in other optical measurement systems. These principles are applied in a novel fashion which is especially suitable for in-process prediction and control. The main objective of this study is to design an ANFIS for estimation of R(a), TWR, and MRR in the micro-turning process. Cutting speed (S), feed rate (F), depth of cut (D), and G(a) were taken as input parameters and R(a), TWR, and MRR as the output parameters. The results obtained from the ANFIS model were compared with experimental values. It is found that the predicted values of the responses are in good agreement with the experimental values.", "keywords": "micro-turning;machine vision;anfis;surface roughness;twr;mrr", "title": "On-line prediction of micro-turning multi-response variables by machine vision system using adaptive neuro-fuzzy inference system (ANFIS)"} {"abstract": "We report on effective prevention of GaAs corrosion in a cell culture liquid environment by means of polymerized (3-mercaptopropyl)-trimethoxysilane thin film coatings. Aging in physiological solution kept at 37 °C revealed no significant oxidation after 2 weeks, which is the typical period of incubation of a neuron cell culture. The method was also applied to High Electron Mobility Transistor (HEMT) arrays with unmetallized gate regions, in view of their application as neural signal transducers. Significant reduction of the degradation of the HEMT behavior was obtained, as compared to uncoated HEMTs, with good channel modulation efficiency still present after 30 days of aging.", "keywords": "gaas;surface passivation;hemt;biosensor", "title": "Gallium arsenide passivation method for the employment of High Electron Mobility Transistors in liquid environment"} {"abstract": "Our recent work has described how to use feature and topology information to compare 3-D solid models. In this work we describe a new method to compare solid models based on shape distributions. Shape distribution functions are common in the computer graphics and computer vision communities. The typical use of shape distributions is to compare 2-D objects, such as those obtained from imaging devices (cameras and other computer vision equipment).
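For the ANFIS prediction abstract above, here is a minimal forward pass of a first-order Sugeno fuzzy system, which is the model class that ANFIS tunes; the membership centres, spreads, and rule consequents mapping (cutting speed, feed) to roughness are all made up for illustration.

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function with centre c and spread s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_predict(x, rules):
    """First-order Sugeno inference: Gaussian antecedents per input,
    linear consequents, weighted-average defuzzification."""
    w = np.array([np.prod([gauss(xi, c, s) for xi, (c, s) in zip(x, r["mf"])])
                  for r in rules])                   # rule firing strengths
    y = np.array([r["coef"] @ np.append(x, 1.0) for r in rules])
    return np.dot(w, y) / w.sum()

# Toy rules mapping (cutting speed, feed rate) to surface roughness Ra;
# every number below is a hypothetical placeholder.
rules = [
    {"mf": [(60, 20), (0.05, 0.03)], "coef": np.array([-0.002, 8.0, 0.9])},
    {"mf": [(60, 20), (0.20, 0.05)], "coef": np.array([-0.001, 6.0, 1.6])},
    {"mf": [(150, 30), (0.05, 0.03)], "coef": np.array([-0.003, 5.0, 0.8])},
    {"mf": [(150, 30), (0.20, 0.05)], "coef": np.array([-0.002, 4.0, 1.3])},
]
print(sugeno_predict(np.array([100.0, 0.12]), rules))
```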
Recent work has applied shape distribution metrics for comparison of approximate models found in the graphics community, such as polygonal meshes, faceted representations, and Virtual Reality Modeling Language (VRML) models. This paper examines how to adapt these techniques to comparison of 3-D solid models, such as those produced by commercial CAD systems. We provide a brief review of shape matching with distribution functions and present an approach to matching solid models. First, we show how to extend basic distribution-based techniques to handle CAD data that has been exported to VRML format. These extensions address specific geometries that occur in mechanical CAD data. Second, we describe how to use shape distributions to directly interrogate solid models. Lastly, we show how these techniques can be put together to provide a \"query by example\" interface to a large, heterogeneous CAD database: The National Design Repository. One significant contribution of our work is the systematic technique for performing consistent, engineering content-based comparisons of CAD models produced by different CAD systems.", "keywords": "mesh;use;solid model databases;3d search;communities;engine;modelling language;approximation;design;metrication;addressing;object;topologies;camera;interfaces;extensibility;imaging;graphics; virtual reality ;paper;shape;model;representation;shape matching;computer vision;comparisons;computer graphics;review;shape recognition;contention;query-by-example;method;systems;solid modeling;device;matching;data;consistency;distributed;repositories;feature;database", "title": "using shape distributions to compare solid models"} {"abstract": "Organic Computing has characteristics similar to those of organisms, which can self-adjust to a variety of conditions. Moreover, in the course of wireless communication technology evolution, WiMAX (Worldwide Interoperability for Microwave Access) offers high capacity and long-distance transmission. WiMAX provides high-speed access and a coverage range across several kilometers, but the actual coverage range is merely a few kilometers due to obstruction by buildings or terrain. The IEEE 802.16 working group designed the 802.16j-based RS (Relay Station) to overcome the above problem. In this paper, we present a mechanism called the Self-Optimization Handover Mechanism. This mechanism uses the GPS (Global Positioning System) navigation system to gather position-related information and combines it with the mobility characteristics of the Mobile Relay Station. In particular, the Self-Optimization concept of Organic Computing has been integrated into this mechanism. This new mechanism has several advantages: (1) the base station can plan ahead and select the path; (2) the mechanism can reduce the number of possible handovers and hops; (3) the mechanism can reduce the channel scan time.", "keywords": "802.16j;handover;mobile relay station;gps navigation;organic computing", "title": "Navigation-based self-optimization handover mechanism for mobile relay stations in WiMAX networks"} {"abstract": "This paper presents new evolutionary computation algorithms for a problem of wind farm design. The algorithms tackle two different problems of offshore wind farm layout.
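A minimal sketch of the D2 shape distribution technique named in the solid-model comparison abstract above: sample random point pairs, histogram their distances, and compare signatures. Point clouds stand in for tessellated CAD surfaces here; sample counts and bin counts are assumptions.

```python
import numpy as np

def d2_distribution(points, n_pairs=10000, bins=64, rng=None):
    """D2 shape distribution: histogram of distances between random
    point pairs sampled from the model (here, a point cloud)."""
    rng = rng or np.random.default_rng(0)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0, 1))
    return hist / hist.sum()                 # scale-normalized signature

rng = np.random.default_rng(0)
cube = rng.uniform(-1, 1, (2000, 3))                       # model A
ball = rng.normal(size=(2000, 3))
ball /= np.linalg.norm(ball, axis=1, keepdims=True)        # model B (sphere)
h1, h2 = d2_distribution(cube), d2_distribution(ball)
print("L1 dissimilarity:", np.abs(h1 - h2).sum())          # compare signatures
```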
Experiments in a real offshore wind farm layout case are shown and discussed.", "keywords": "offshore wind farm design;optimal layouts;evolutionary computation;real case study", "title": "Evolutionary computation approaches for real offshore wind farm layout: A case study in northern Europe"} {"abstract": "Due to the increasing complexity of current digital data, similarity search has become a fundamental computational task in many applications. Unfortunately, its costs are still high and grow linearly on single server structures, which prevents them from efficient application on large data volumes. In this paper, we briefly describe four recent scalable distributed techniques for similarity search and study their performance in executing queries on three different datasets. Though all the methods employ parallelism to speed up query execution, the experiments identified different advantages for different objectives. The reported results would be helpful for choosing the best implementations for specific applications. They can also be used for designing new and better indexing structures in the future.", "keywords": "similarity search;scalability;metric space;distributed index structures;peer-to-peer networks", "title": "Scalability comparison of Peer-to-Peer similarity search structures"} {"abstract": "This paper presents a new, fast Modified Recursive Gauss-Newton (MRGN) method for the estimation of power quality indices in distributed generating systems during both islanding and non-islanding conditions. A forgetting factor weighted error cost function is minimized by the well known Gauss-Newton algorithm and the resulting Hessian matrix is approximated by ignoring the off-diagonal terms. This simplification produces a decoupled algorithm for the fundamental and harmonic components and results in a large reduction of computational effort when the power signal contains a large number of harmonics. Numerical experiments have shown that the proposed approach results in higher speed of convergence and accurate tracking of power signal parameters in the presence of noise, waveform distortion, etc., making it suitable for the estimation of power quality indices. In the case of a distribution network, power islands occur when power supply from the main utility is interrupted due to faults or otherwise and the distributed generation system (DG) keeps supplying power into the network. Further, due to unbalanced load conditions the DG is subject to unbalanced voltages at its terminals and suffers from increased total harmonic distortion (THD). Thus, the power quality indices estimation, along with the power system frequency estimation, will play a vital role in detecting power islands in distributed generating systems. Extensive studies on both simulated and real benchmark hybrid distribution networks involving distributed generation systems reveal the effectiveness of the proposed approach to calculate the power quality indices accurately.", "keywords": "power quality indices;total harmonic distortion;sequence voltages and currents;power measurements;distributed generation;islanding condition", "title": "Estimation of power quality indices in distributed generation systems during power islanding conditions"} {"abstract": "We developed a high-throughput screening system that allows identification of genes prolonging life span in the budding yeast Saccharomyces cerevisiae.
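To illustrate the harmonic-estimation task in the MRGN abstract above, here is a batch least-squares sketch on a sin/cos basis; a recursive Gauss-Newton scheme would update the same coefficients sample by sample instead. The signal, frequencies, and noise level are synthetic assumptions.

```python
import numpy as np

f0, fs, H = 50.0, 3200.0, 5                   # fundamental, sampling rate, harmonics
t = np.arange(256) / fs

# Synthetic distorted power signal plus noise (all amplitudes made up).
x = (1.0 * np.sin(2*np.pi*f0*t + 0.3)
     + 0.2 * np.sin(2*np.pi*3*f0*t + 1.1)
     + 0.1 * np.sin(2*np.pi*5*f0*t - 0.7)
     + 0.02 * np.random.default_rng(0).standard_normal(t.size))

# Linear least squares on sin/cos regressors for each harmonic.
A = np.column_stack([np.sin(2*np.pi*h*f0*t) for h in range(1, H+1)] +
                    [np.cos(2*np.pi*h*f0*t) for h in range(1, H+1)])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)
amp = np.hypot(coef[:H], coef[H:])            # per-harmonic amplitudes
thd = np.sqrt(np.sum(amp[1:]**2)) / amp[0]    # total harmonic distortion index
print(amp.round(3), round(thd, 4))
```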
The method is based on isolating yeast mother cells with an extended number of cell divisions, as indicated by the increased number of bud scars on their surface. Fluorescently labeled wheat germ agglutinin (WGA) was used for specific staining of bud scars. Screening of a human HepG2 cDNA expression library in yeast resulted in the isolation of several yeast transformants with a potentially prolonged life span. The budding yeast S. cerevisiae, one of the favorite models used to study aging, has been studied extensively for the better understanding of the mechanisms of human aging. Because human disease genes often have yeast counterparts, they can be studied efficiently in this organism. One interesting example is the WRN gene, a human DNA helicase, which participates in the DNA repair pathway. Mutation of the WRN gene causes Werner syndrome, which shows a premature-aging phenotype. Budding yeast contains a WRN homologue, SGS1, whose mutation shortens the yeast life span. The knowledge gained from the studies of budding yeast will benefit studies in humans for better understanding of aging and aging-related disease.", "keywords": "aging;budding yeast;wga;bud scar;life span", "title": "The Bud Scar-Based Screening System for Hunting Human Genes Extending Life Span"} {"abstract": "While most models of location decisions of firms are based on the principle of utility-maximizing behavior, the present study assumes that location decisions are just part of business cycle models, in which location is considered alongside other business decisions. The business model results in a series of location requirements and these are matched against location characteristics. Given this theoretical perspective, the modeling challenge then becomes how to find the match between firm types and the set of location characteristics using observations of the spatial distribution of firms. In this paper, several Bayesian classifier networks are compared in terms of their performance, using a large data set collected for the Netherlands. Results demonstrate that by taking relationships between predictor variables into account the Bayesian classifiers can improve prediction accuracy compared to a commonly used decision tree. From a substantive point of view, our results indicate that different sets of urban characteristics and accessibility requirements are relevant to different office types, as reflected in the spatial distribution of these office firms.", "keywords": "office location;bayesian classifier networks;decision trees;luti models", "title": "Matching office firms types and location characteristics: An exploratory analysis using Bayesian classifier networks"} {"abstract": "The work outlined here was inspired by [1, 3], where the authors analyze the mental models of recursion by looking at how students trace simple recursive computations. Besides trying to understand if their results generalize to a different context, I was interested to see the correlations between the mental models of the computation process and the ability to establish recursive relationships in the problem domain. My investigation essentially lends further support to the findings of [3]. However, a consistent mental model of recursive computations, although implied by the ability to use recursion in problem-solving, does not seem to be sufficient for the achievement of this higher-level skill.", "keywords": "mental models;programming learning;recursion", "title": "mental models of recursive computations vs.
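For the office-location abstract above, which compares Bayesian classifiers against decision trees, here is a minimal Gaussian naive Bayes sketch (a simpler relative of the Bayesian classifier networks the paper studies); the feature names, data, and class labels are all invented for illustration.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: class-conditional independence of
    the location-characteristic features given the office firm type."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        self.var = np.array([X[y == c].var(0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self
    def predict(self, X):
        # Log-likelihood of each sample under each class's Gaussian model.
        ll = (-0.5 * (np.log(2*np.pi*self.var[:, None, :])
              + (X[None] - self.mu[:, None, :])**2 / self.var[:, None, :])).sum(-1)
        return self.classes[np.argmax(ll + np.log(self.prior)[:, None], axis=0)]

# Hypothetical features: [accessibility score, urban density, rent level].
X = np.array([[0.9, 0.8, 0.7], [0.8, 0.9, 0.8], [0.2, 0.3, 0.2],
              [0.3, 0.2, 0.3], [0.85, 0.75, 0.9], [0.25, 0.35, 0.25]])
y = np.array([1, 1, 0, 0, 1, 0])              # 1 = city-centre office, 0 = suburban
print(GaussianNB().fit(X, y).predict(np.array([[0.7, 0.8, 0.6]])))
```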
recursive analysis in the problem domain"} {"abstract": "We present a mobile multi-touch interface for selecting, querying, and visually exploring data visualized on large, high-resolution displays. Although emerging large (e.g., ~10 m wide), high-resolution displays provide great potential for visualizing dense, complex datasets, their utility is often limited by a fundamental interaction problem: the need to interact with data from multiple positions around a large room. Our solution is a selection and querying interface that combines a hand-held multi-touch device with 6 degree-of-freedom tracking in the physical space that surrounds the large display. The interface leverages context from both the user's physical position in the room and the current data being visualized in order to interpret multi-touch gestures. It also utilizes progressive refinement, favoring several quick approximate gestures as opposed to a single complex input in order to most effectively map the small mobile multi-touch input space to the large display wall. The approach is evaluated through two interdisciplinary visualization applications: a multi-variate data visualization for social scientists, and a visual database querying tool for biochemistry. The interface was effective in both scenarios, leading to new domain-specific insights and suggesting valuable guidance for future developers.", "keywords": "multi-touch;progressive refinement;3d user interface;mobile device;3d tracking;ray casting;selection", "title": "Scaling up multi-touch selection and querying: Interfaces and applications for combining mobile multi-touch input with large-scale visualization displays"} {"abstract": "Increasing globalization of the economy is imposing tough challenges on manufacturing companies. The ability to produce highly customized products, in order to satisfy market niches, requires the introduction of new features in automation systems. Flexible manufacturing processes must be able to handle unforeseen events, but their complexity makes the supervision and maintenance task difficult for human operators to perform. This paper describes how linguistic equations (LE), an intelligent method derived from fuzzy algorithms, have been used in a decision-helping tool for electronic manufacturing. In our case the company involved in the project mainly produces control cards for the automotive industry. In their business, nearly 70% of the cost of a product is material cost. Detecting defects and repairing the printed circuit boards is therefore a necessity. With an ever increasing complexity of the products, defects are very likely to occur, no matter how much attention is put into their prevention. Therefore, the system described in this paper comes into use only during the final testing of the product and is purely oriented towards the detection and localization of defects. Final control is based on functional testing. Using linguistic equations and expert knowledge, the system is able to analyze that data and successfully detect and trace a defect in a small area of the printed circuit board. If a sufficient amount of data is provided, self-tuning and self-learning methods can be used.
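Related to the recursion mental-models abstract above, here is a small Python example of the kind of trace students are asked to produce: a decorator that prints the call tree of a recursive computation. The function choice is illustrative, not taken from the study.

```python
def trace(f):
    """Decorator that prints the call tree of a recursive function,
    making the sequence of recursive calls and returns visible."""
    depth = [0]
    def wrapper(*args):
        print("  " * depth[0] + f"{f.__name__}{args}")
        depth[0] += 1
        result = f(*args)
        depth[0] -= 1
        print("  " * depth[0] + f"-> {result}")
        return result
    return wrapper

@trace
def power(b, n):                    # a simple recursive computation to trace
    return 1 if n == 0 else b * power(b, n - 1)

power(2, 3)                         # prints the nested calls and their results
```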
Diagnosis effectiveness can therefore be improved from detection of a functional area towards component-level analysis.", "keywords": "linguistic equations;defect detection;diagnosis;knowledge;fuzzy logic", "title": "Knowledge-based linguistic equations for defect detection through functional testing of printed circuit boards"} {"abstract": "This paper uses the artificial neural networks (ANNs) approach to evolve an efficient model for estimation of cutting forces, based on a set of input cutting conditions. Neural network (NN) algorithms are developed for use as a direct modelling method, to predict forces for ball-end milling operations. Prediction of cutting forces in ball-end milling is often needed in order to establish automation or optimization of the machining processes. Supervised NNs are used to successfully estimate the cutting forces developed during end milling processes. The training of the networks is performed with experimental machining data. The predictive capability of using analytical and NN approaches is compared. The three cutting force components were predicted by the NN with 4% error relative to the experimental measurements. Exhaustive experimentation is conducted to develop the model and to validate it. By means of the developed method, it is possible to forecast the development of events that will take place during the milling process without executing the tests. The force model can be used for simulation purposes and for defining threshold values in a cutting tool condition monitoring system. It can also be used in combination for monitoring and optimizing the machining process cutting parameters.", "keywords": "machining;cutting forces;modeling;neural network;experimental measurements;milling", "title": "Dynamic neural network approach for tool cutting force modelling of end milling operations"} {"abstract": "Over the last 20 years, humanities and archival scholars have theorized the ways in which archives imbue records with meaning. However, archival scholars have not sufficiently examined how users understand the meaning of the records they find. Building on the premise that how users come to make meaning from records is greatly in need of examination, this paper reports on a pilot study of four book history students and their processes of archival meaning-making. We focus in particular on behaviors of an interpretive rather than forensic nature. This article includes a discussion of the theoretical concepts and scholarly literature that shaped our goals for this paper. It then discusses the methodology and our interpretations of the research findings, before turning to a discussion of the findings' implications and directions for future work.", "keywords": "meaning-making;information use;book history", "title": "Contexts built and found: a pilot study on the process of archival meaning-making"} {"abstract": "While inferring the geo-locations of web images has been widely studied, there is limited work engaging in geo-location inference of web videos due to inadequate labeled samples available for training. However, such a geographical localization functionality is of great importance to help existing video sharing websites provide location-aware services, such as location-based video browsing, video geo-tag recommendation, and location sensitive video search on mobile devices.
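A minimal sketch of the supervised NN force-prediction setup from the ball-end milling abstract above, using scikit-learn; the training data here is synthetic (the paper uses experimental measurements), and the force formulas are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic stand-ins for measured data: inputs are cutting speed,
# feed rate, and depth of cut; outputs are the three force components.
X = rng.uniform([50, 0.05, 0.2], [200, 0.30, 2.0], size=(120, 3))
F = np.column_stack([
    30*X[:, 1]*X[:, 2] + 0.05*X[:, 0],        # Fx (invented relation)
    55*X[:, 1]*X[:, 2] + 0.02*X[:, 0],        # Fy (invented relation)
    18*X[:, 1]*X[:, 2] + 0.01*X[:, 0],        # Fz (invented relation)
]) * (1 + 0.04*rng.standard_normal((120, 3)))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=5000, random_state=0))
model.fit(X[:100], F[:100])                   # train on 100 "experiments"
err = np.abs(model.predict(X[100:]) - F[100:]) / np.abs(F[100:])
print("mean relative error:", err.mean().round(3))
```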
In this paper, we address the problem of localizing web videos by transferring large-scale web images with geographic tags to web videos, where near-duplicate detection between images and video frames is conducted to link visually relevant web images and videos. To carry out our approach, we select trustworthy web images by evaluating the consistency between the visual features and the associated metadata of the collected images, thereby eliminating noisy images. On this basis, a novel transfer learning algorithm is proposed to align the landmark prototypes across both domains of images and video frames, leading to a reliable prediction of the geo-locations of web videos. A group of experiments is carried out on two datasets which collect Flickr images and YouTube videos crawled from the Web. The experimental results demonstrate the effectiveness of our video geo-location inference approach, which outperforms several competing approaches based on traditional frame-level video geo-location inference.", "keywords": "web video analysis;cross-domain;social media;landmark recognition;classification", "title": "Localizing web videos using social images"} {"abstract": "Often, independent organizations define and advocate different XML formats for a similar purpose and, as a result, application programs need to mutually convert between such formats. Existing XML transformation languages, such as XSLT and XDuce, are unsatisfactory for this purpose since we would have to write, e.g., two programs for the forward and the backward transformations in the case of two formats, incurring high development and maintenance costs. This paper proposes the bidirectional XML transformation language biXid, allowing us to write only one program for both directions of conversion. Our language adopts a common paradigm, programming-by-relation, where a program defines a relation over documents and transforms a document to another in a way satisfying this relation. Our contributions here are specific language features for facilitating realistic conversions whose target formats are loosely in parallel but have many discrepancies in details. Concretely, we (1) adopt XDuce-style regular expression patterns for describing and analyzing XML structures, (2) fully permit ambiguity for treating formats that do not have equivalent expressiveness, and (3) allow non-linear pattern variables for expressing non-trivial transformations that cannot be written only with linear patterns, such as conversion between unordered and ordered data. We further develop an efficient evaluation algorithm for biXid, consisting of the \"parsing\" phase that transforms the input document to an intermediate \"parse tree\" structure and the \"unparsing\" phase that transforms it to an output document. Both phases use a variant of finite tree automata for performing a one-pass scan on the input or the parse tree by using a standard technique that \"maintains the set of all transitable states.\" However, the construction of the \"unparsing\" phase is challenging since ambiguity causes different ways of consuming the parse tree and thus results in multiple possible outputs that may have different structures.
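For the web-video localization abstract above, here is a sketch of the sparse cross-domain reconstruction idea: representing a source image's feature vector as a sparse combination of target-domain frame features, so that the nonzero coefficients identify near-duplicate frames. The features are random stand-ins, and the use of a plain Lasso is an assumption rather than the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
frames = rng.random((200, 64))                # target-domain frame features (synthetic)
img = 0.6 * frames[17] + 0.4 * frames[42]     # a source image resembling two frames

# Solve img ~= frames.T @ w with a sparse, non-negative w; nonzero
# coefficients link the image (and its geo-tag) to matching frames.
model = Lasso(alpha=1e-3, positive=True, max_iter=10000)
model.fit(frames.T, img)                      # columns = candidate target neighbours
support = np.flatnonzero(model.coef_)
print(support, model.coef_[support].round(2)) # expected to recover frames 17 and 42
```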
We have implemented a prototype system of biXid and confirmed, through experiments with several realistic bidirectional transformations including one between vCard-XML and ContactXML, that it has enough expressiveness and linear-time performance.", "keywords": "xml;tree automata", "title": "biXid: A bidirectional transformation language for XML"} {"abstract": "Data warehousing is an approach to data integration wherein integrated information is stored in a data warehouse for direct querying and analysis. To provide fast access, a data warehouse stores materialized views of the sources of its data. As a result, a data warehouse needs to be maintained to keep its contents consistent with the contents of its data sources. Incremental maintenance is generally regarded as a more efficient way to maintain materialized views in a data warehouse. In this paper a strategy for the maintenance of a data warehouse is presented. It has the following characteristics: it is (weakly) self-maintainable, incremental, non-blocking (the analysts' transactions and the maintenance transaction are executed concurrently) and is performed in real time. The proposed algorithm is implemented for view definition SPJ (Select Project Join) queries and it calculates the aggregate functions: sum, avg, count, min and max. Aggregate functions are calculated like algebraic functions (the new result of the function can be computed using some small, constant size storage that accompanies the existing value of the aggregate). We have named this improved algorithm ΔVNLTR (unlimited ΔV (versions), NL (non-blocking), TR (in real time)).", "keywords": "self-maintainable;data warehouse", "title": "real time self-maintenable data warehouse"} {"abstract": "With the application of the Web 2.0 philosophy to more and more online services and platforms, tagging has become a well-established collaboration method. It is often used to simplify organization, navigation and discovery of information and resources in huge archives. In parallel, due to recent developments in digital television, audiences are confronted with a rising amount of available content and demand for better ways to discover programs of interest. In this paper, we propose a tagging-based solution to this problem. Using a content-based filtering approach, we present an individualized and flexible tag generation process. User-specific as well as collaborative tag generation is enabled. Based on generated and user-added tags, program recommendations are derived in a collaborative filtering step.", "keywords": "metadata-based filtering;tag generation;folksonomy;bayesian classifier;tv recommender;collaborative filtering;recommendation system;tags;epg", "title": "content-based tag generation to enable a tag-based collaborative tv-recommendation system."} {"abstract": "We consider an M/M/1 queueing system with inventory under the (r,Q) policy and with lost sales, in which demands occur according to a Poisson process and service times are exponentially distributed. All arriving customers during stockout are lost. We derive the stationary distributions of the joint queue length (number of customers in the system) and on-hand inventory when lead times are random variables and can follow various distributions.
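To make the "algebraic aggregate functions" remark in the data-warehouse abstract above concrete, here is a minimal sketch of maintaining sum, count, avg, min, and max from constant-size state; the class and its interface are illustrative, not the paper's implementation.

```python
class IncrementalAggregates:
    """Algebraic aggregates maintained from small, constant-size state,
    as in incremental materialized-view maintenance."""
    def __init__(self):
        self.total, self.count = 0.0, 0
        self.lo, self.hi = float("inf"), float("-inf")

    def insert(self, v):
        self.total += v
        self.count += 1
        self.lo, self.hi = min(self.lo, v), max(self.hi, v)

    def delete(self, v):
        # sum/count/avg are self-maintainable under deletions; min/max are
        # only weakly self-maintainable (a recomputation is needed when the
        # current extreme is deleted), matching the "(weakly)" caveat above.
        self.total -= v
        self.count -= 1

    @property
    def avg(self):
        return self.total / self.count if self.count else None

agg = IncrementalAggregates()
for v in [4, 9, 2, 7]:
    agg.insert(v)
agg.delete(9)
print(agg.total, agg.count, agg.avg, agg.lo, agg.hi)
```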
The derived stationary distributions are used to formulate long-run average performance measures and cost functions in some numerical examples.", "keywords": "queueing;inventory;stationary distribution;lost sale;regenerative process;", "title": "The M/M/1 queue with inventory, lost sale, and general lead times"} {"abstract": "The basis of dynamic data rectification is a dynamic process model. The successful application of the model requires the fulfilment of a number of objectives that are as wide-ranging as the estimation of the process states, process signal denoising and outlier detection and removal. Current approaches to dynamic data rectification include the conjunction of the Extended Kalman Filter (EKF) and the expectation-maximization algorithm. However, this approach is limited due to the EKF being less applicable where the state and measurement functions are highly non-linear or where the posterior distribution of the states is non-Gaussian. This paper proposes an alternative approach whereby particle filters, based on the sequential Monte Carlo method, are utilized for dynamic data rectification. By formulating the rectification problem within a probabilistic framework, the particle filters generate Monte Carlo samples from the posterior distribution of the system states, and thus provide the basis for rectifying the process measurements. Furthermore, the proposed technique is capable of detecting changes in process operation and thus complements the task of process fault diagnosis. The appropriateness of particle filters for dynamic data rectification is demonstrated through their application to an illustrative non-linear dynamic system, and a benchmark pH neutralization process. ", "keywords": "dynamic data rectification;filtering;particle filters;sequential monte carlo;state estimation", "title": "Dynamic data rectification using particle filters"} {"abstract": "In this paper we consider neighbor sensor networks, which are defined as multiple wireless sensor networks under the administration of different authorities but located physically in the same area or close to each other. We construct a Linear Programming framework to characterize the cooperation of neighbor sensor networks in comparison to non-cooperating networks. We show that if neighbor sensor networks cooperate with each other for relaying data packets then this cooperation brings two advantages compared to the no-cooperation case. First, the lifetime of both networks is prolonged: the results of our analysis show that cooperation between neighbor sensor networks can significantly extend the overall network lifetime. Second, cooperation reduces the probability of disjoint partitions arising due to the limited transmission ranges of sensor nodes. When neighbor sensor networks cooperate, eliminating disjoint partitions is possible with sensors having shorter transmission ranges, as demonstrated and quantified by our analysis.", "keywords": "wireless sensor networks;linear programming;network lifetime;disjoint partition;cooperation", "title": "Neighbor sensor networks: Increasing lifetime and eliminating partitioning through cooperation"} {"abstract": "Transmission of block-coded images through an error-prone mobile radio channel often results in lost blocks. Error concealment (EC) techniques exploit inherent redundancy and reduce visual artifacts through post-processing at the decoder side.
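A minimal bootstrap particle filter sketch, illustrating the sequential Monte Carlo rectification described in the particle-filter abstract above; the state-space model here is a standard benchmark nonlinear system, not the paper's pH neutralization process.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 50, 1000                       # time steps, particles

# Benchmark nonlinear model (stand-in for a real process model):
#   x_t = 0.5 x_{t-1} + 25 x_{t-1}/(1+x_{t-1}^2) + v_t,  y_t = x_t^2/20 + w_t
x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = (0.5*x_true[t-1] + 25*x_true[t-1]/(1+x_true[t-1]**2)
                 + rng.normal(0, np.sqrt(10)))
    y[t] = x_true[t]**2/20 + rng.normal(0, 1)

particles = rng.normal(0, 2, N)
est = np.zeros(T)
for t in range(1, T):
    # Prediction: propagate particles through the transition model.
    particles = (0.5*particles + 25*particles/(1+particles**2)
                 + rng.normal(0, np.sqrt(10), N))
    # Update: weight by measurement likelihood, then resample.
    w = np.exp(-0.5*(y[t] - particles**2/20)**2) + 1e-300
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]
    est[t] = particles.mean()         # rectified (posterior mean) state
print(np.sqrt(np.mean((est - x_true)**2)))    # RMSE of the rectified states
```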
In this paper, we propose an efficient quantization index modulation (QIM)-based data hiding scheme using dual-tree complex wavelet transform (DTCWT) for the application of image error concealment. The goal is achieved by embedding important information (image digest) as a watermark signal that is extracted from the original image itself and is used to introduce sufficient redundancy in the transmitted image. At the decoder side, the extracted image digest is used to correct the damaged regions. DTCWT offers three-fold advantages, viz. (1) high embedding capacity due to inherent redundancy that leads to the better reconstruction of high volume missing data, (2) better imperceptibility after data embedding since it captures human visual system (HVS) characteristics more closely than the conventional DWT, and (3) better watermark decoding reliability. Simulation results duly support these claims and show relative performance improvements with respect to existing results.", "keywords": "error concealment;data hiding;dtcwt;halftoning;qim;image digest", "title": "IMAGE ERROR CONCEALMENT BASED ON QIM DATA HIDING IN DUAL-TREE COMPLEX WAVELETS"} {"abstract": "The threat of cyber attacks motivates the need to monitor Internet traffic data for potentially abnormal behavior. Due to the enormous volumes of such data, statistical process monitoring tools, such as those traditionally used on data in the product manufacturing arena, are inadequate. \"Exotic\" data may indicate a potential attack; detecting such data requires a characterization of \"typical\" data. We devise some new graphical displays, including a \"skyline plot,\" that permit ready visual identification of unusual Internet traffic patterns in \"streaming\" data, and use appropriate statistical measures to help identify potential cyberattacks. These methods are illustrated on a moderate-sized data set (135,605 records) collected at George Mason University. ", "keywords": "logarithmic transformation;computational methods;recursive computation;graphical displays;exploratory data analysis", "title": "Visualizing \"typical\" and \"exotic\" Internet traffic data"} {"abstract": "We present data showing a strong correlation between students' time management and a successful outcome on programming assignments. Students who spread their work over more time will produce a better result without additional expenditure of total effort. We examined performance of students who sometimes did well and sometimes did poorly, and found that their good performance occurred on the projects where they displayed better time management. While these results will not surprise most instructors, hard data is more compelling than intuition when trying to train students to use good time management.", "keywords": "time management;student performance;scheduling", "title": "scheduling and student performance"} {"abstract": "Security is a fundamental precondition for the acceptance of mobile agent systems. In this paper we present a mobile agent structure which supports authentication, security management and access control for mobile agents.", "keywords": "mobile agent security;agent authentication;key management;access control;access groups agent confidentiality", "title": "Access control and key management for mobile agents"} {"abstract": "This study focuses on an alignment-free sequence comparison method: the number of words of length k shared between two sequences, also known as the D2 statistic.
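A minimal sketch of the QIM embedding named in the error-concealment abstract above: each coefficient is quantized to an even or odd lattice depending on the bit to hide. The step size and example values are assumptions, and the transform-domain context (DTCWT coefficients) is omitted here.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Quantization index modulation: quantize each coefficient to the
    even (bit 0) or odd (bit 1) lattice of step size delta."""
    bits = np.asarray(bits)
    return delta * (np.round(coeffs / delta - bits / 2) + bits / 2)

def qim_extract(coeffs, delta=8.0):
    # Decode by finding which lattice each coefficient is closest to.
    even = delta * np.round(coeffs / delta)
    odd = delta * (np.round(coeffs / delta - 0.5) + 0.5)
    return (np.abs(coeffs - odd) < np.abs(coeffs - even)).astype(int)

c = np.array([13.2, -7.9, 41.0, 3.3])          # e.g., wavelet coefficients
marked = qim_embed(c, [1, 0, 1, 1])
print(marked, qim_extract(marked + 0.5))       # decoding tolerates small distortion
```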
The advantages of the use of this statistic over alignment-based methods are firstly that it does not assume that homologous segments are contiguous, and secondly that the algorithm is computationally extremely fast, the runtime being proportional to the size of the sequence under scrutiny. Existing applications of the D2 statistic include the clustering of related sequences in large EST databases such as the STACK database. Such applications have typically relied on heuristics without any statistical basis. Rigorous statistical characterisations of the distribution of D2 have subsequently been undertaken, but have focussed on the distribution's asymptotic behaviour, leaving the distribution of D2 uncharacterised for most practical cases. The work presented here bridges these two worlds to give usable approximations of the distribution of D2 for ranges of parameters most frequently encountered in the study of biological sequences.", "keywords": "alignment-free sequence comparison;biological sequences;genomic data", "title": "Empirical distribution of k-word matches in biological sequences"} {"abstract": "Experiment replication is a key component of the scientific paradigm. The purpose of replication is to verify previously observed findings. Although some Software Engineering (SE) experiments have been replicated, there is still disagreement about how replications should be run in our field. With the aim of gaining a better understanding of how replications are carried out, this paper examines different replication types in other scientific disciplines. We believe that by analysing the replication types proposed in other disciplines it is possible to clarify some of the question marks still hanging over experimental SE replication.", "keywords": "software engineering;experimental paradigm;classifications of replications;types of replications;experimental replication", "title": "replications types in experimental disciplines"} {"abstract": "Broadcast capacity of the entire network is one of the fundamental properties of vehicular ad hoc networks (VANETs). It measures how efficiently the information can be transmitted in the network and usually it is limited by the interference between the concurrent transmissions in the physical layer of the network. This study defines the broadcast capacity of a vehicular ad hoc network as the maximum number of successful concurrent transmissions. In other words, we measure the maximum number of packets which can be transmitted in a VANET simultaneously, which characterizes how fast a new message such as a traffic incident can be transmitted in a VANET. Integer programming (IP) models are first developed to explore the maximum number of successful receiving nodes as well as the maximum number of transmitting nodes in a VANET. The models embed a traffic flow model in the optimization problem. Since the IP model cannot be efficiently solved as the network size increases, this study develops a statistical model to predict the network capacity based on the significant parameters in the transportation and communication networks. MITSIMLab is used to generate the necessary traffic flow data. Response surface methods and linear regression techniques are applied to build the statistical models. Thus, this paper brings together an array of tools to solve the broadcast capacity problem in VANETs.
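The D2 statistic from the k-word abstract above is simple enough to state in a few lines of Python; this sketch counts matches over all pairs of k-word occurrence positions, with toy sequences as input.

```python
from collections import Counter

def d2(seq_a, seq_b, k):
    """D2 statistic: number of k-word matches between two sequences,
    counted over all pairs of occurrence positions."""
    wa = Counter(seq_a[i:i+k] for i in range(len(seq_a) - k + 1))
    wb = Counter(seq_b[i:i+k] for i in range(len(seq_b) - k + 1))
    # Each occurrence of a word in A pairs with each occurrence in B.
    return sum(na * wb[w] for w, na in wa.items())

print(d2("ACGTACGT", "CGTACG", 3))   # 6 shared 3-word occurrence pairs
```

The single pass over each sequence matches the abstract's remark that the runtime is proportional to sequence length.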
The proposed methodology provides an efficient approach to estimate the performance of a VANET in real time, which will impact the efficacy of travel decision making.", "keywords": "vehicular ad hoc networks;atis;broadcast capacity;information flow;optimization;integer program", "title": "Optimization models to characterize the broadcast capacity of vehicular ad hoc networks"} {"abstract": "Techniques based on agglomerative hierarchical clustering constitute one of the most frequent approaches in unsupervised clustering. Some are based on the single linkage methodology, which has been shown to produce good results with sets of clusters of various sizes and shapes. However, the application of this type of algorithms in a wide variety of fields has posed a number of problems, such as the sensitivity to outliers and fluctuations in the density of data points. Additionally, these algorithms do not usually allow for automatic clustering. In this work we propose a method to improve single linkage hierarchical cluster analysis (HCA), so as to circumvent most of these problems and attain the performance of the most sophisticated new approaches. This completely automated method is based on a self-consistent outlier reduction approach, followed by the building-up of a descriptive function. This, in turn, allows natural clusters to be defined. Finally, the discarded objects may be optionally assigned to these clusters. The validation of the method is carried out by employing widely used data sets available from literature and others for specific purposes created by the authors. Our method is shown to be very efficient in a large variety of situations. ", "keywords": "clustering;unsupervised pattern recognition;hierarchical cluster analysis;single linkage;outlier removal", "title": "Improving hierarchical cluster analysis: A new method with outlier detection and automatic clustering"} {"abstract": "This paper uses the rigorous methods of mathematics to explore the sensitivity analysis of Park [Int. J. Syst. Sci. 13 (1982) 1313], whereas Park discusses the sensitivity analysis only through numerical examples. The results obtained by this paper show that the sensitivity analysis of Park is not always valid. Researchers should therefore be careful when using, in general, conclusions of sensitivity analyses based on numerical examples.", "keywords": "inventory model;partial backorders;sensitivity", "title": "The sensitivity of the inventory model with partial backorders"} {"abstract": "With the increase of communication demand and the emergence of new services, various innovative wireless technologies have been deployed recently. Free Space Optics (FSO) links combined with Radio over Fiber (RoF) technology can realize a cost-effective heterogeneous wireless access system for both urban and rural areas. In this paper, we introduce a newly developed advanced DWDM Radio-on-FSO (RoFSO) system capable of simultaneously transmitting multiple Radio Frequency (RF) signals carrying various wireless services including W-CDMA, WLAN IEEE802.11g and ISDB-T signals over an FSO link.
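For the improved single-linkage HCA abstract above, here is a sketch pairing scipy's single-linkage clustering with a simple nearest-neighbour outlier-trimming step; the trimming rule (mean plus two standard deviations) is an assumption standing in for the paper's self-consistent outlier reduction.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(4, 0.3, (50, 2)),
               rng.uniform(-2, 6, (5, 2))])   # two clusters plus scattered outliers

# Outlier reduction step: discard points whose nearest-neighbour
# distance is unusually large before clustering.
D = squareform(pdist(X))
np.fill_diagonal(D, np.inf)
nn = D.min(axis=1)
keep = nn < nn.mean() + 2 * nn.std()

Z = linkage(X[keep], method="single")         # single-linkage HCA
labels = fcluster(Z, t=1.0, criterion="distance")
print(np.unique(labels, return_counts=True), int((~keep).sum()), "points trimmed")
```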
We present an experimental performance evaluation of transmitting RF signals using the RoFSO system over a 1 km link under different deployment environment conditions. This work represents a pioneering attempt, based on a realistic operational scenario, aiming at demonstrating that the RoFSO system can be conveniently used as a reliable alternative broadband wireless technology for complementing optical fiber networks in areas where the deployment of optical fiber is not feasible.", "keywords": "radio on free space optics ;radio over fiber ;w-cdma;isdb-t;wlan", "title": "Performance Evaluation of an Advanced DWDM RoFSO System for Transmitting Multiple RF Signals"} {"abstract": "The objective of this paper is to identify the robot's location in a global map solely from sonar-based information. This is achieved by using fuzzy sets to model sonar data and by using fuzzy triangulation to identify the robot's position and orientation. As a result we obtain a fuzzy position region where each point in the region has a degree of certainty of being the actual position of the robot. ", "keywords": "fuzzy sets;uncertainty;sonar;mobile robots;localization", "title": "Sonar based mobile robot localization by using fuzzy triangulation"} {"abstract": "An application of the Fuzzy Inference System (FIS) for bruise colour recognition is suggested in the paper. Input information to the system is taken from images that include a bruise and the surrounding healthy skin. Six basic colour groups are formulated for the bruise images: red, blue, yellow, brown, green and purple. The input variables of the FIS are connected with the information from the pixels of the images in some colour models (RGB, HSV or Lab). The output variables are the classes - the basic colour groups. The Matlab environment was used for representation of the membership functions.", "keywords": "bruise age determination;bruise age;colours and image analysis;bruises", "title": "fuzzy representation for classification of basic bruise colours"} {"abstract": "Demolding force for the thermal imprint process on polymethylmethacrylate (PMMA) film is examined by use of Si templates with various side wall profiles. Patterns with a tapered side wall profile can be fabricated by control of etching conditions. The side wall profile can be smoothened by anisotropic etching using a mixed solution of potassium hydroxide (KOH) and isopropyl alcohol. It is confirmed that demolding force can be reduced when a mold with a tapered side wall pattern is used. Demolding force can be greatly reduced by KOH treatment. In particular, when a template with tapered and smooth side wall patterns is used, demolding force is below our measurement system limit of 0.1 kgf. It is confirmed that the KOH treatment is very effective in order to reduce demolding force.", "keywords": "imprint template;silicon deep etching;bosch process;scalloping;anisotropic etching;potassium hydroxide", "title": "Silicon template fabrication for imprint process with good demolding characteristics"} {"abstract": "Given the contemporary trend toward modular NLP architectures and multiple annotation frameworks, the existence of concurrent tokenizations of the same text represents a pervasive problem in everyday NLP practice and poses a non-trivial theoretical problem to the integration of linguistic annotations and their interpretability in general.
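Relating to the bruise-colour FIS abstract above, here is a sketch of fuzzy colour classification using triangular membership functions over hue; the hue prototypes are invented placeholders (brown is omitted since it is not separable by hue alone), and the actual system would tune its membership functions from labelled images.

```python
import numpy as np
import colorsys

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0)

# Illustrative hue prototypes (degrees) for some of the colour groups.
groups = {"red": (330, 360, 390), "yellow": (30, 55, 80),
          "green": (80, 120, 160), "blue": (160, 220, 280),
          "purple": (260, 290, 330)}

def classify(rgb):
    h = colorsys.rgb_to_hsv(*(v / 255 for v in rgb))[0] * 360
    # Evaluate hue against each group; red wraps around 360 degrees.
    scores = {g: max(tri(h, *p), tri(h + 360, *p)) for g, p in groups.items()}
    return max(scores, key=scores.get), scores

print(classify((180, 160, 40)))   # a yellowish bruise region -> "yellow"
```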
This paper describes a solution for integrating different tokenizations using a standoff XML format, and discusses the consequences from a corpus-linguistic perspective.", "keywords": "linguistic annotation;multi-layer annotation;conflicting tokenizations;tokenization alignment;corpus linguistics", "title": "By all these lovely tokens... Merging conflicting tokenizations"} {"abstract": "Quantum finite automata have been studied intensively since their introduction in the late 1990s as a natural model of a quantum computer working with finite-dimensional quantum memory space. This paper seeks their direct application to interactive proof systems in which a mighty quantum prover communicates with a quantum-automaton verifier through a common communication cell. Our quantum interactive proof systems are juxtaposed to Dwork-Stockmeyer's classical interactive proof systems, whose verifiers are two-way probabilistic finite automata. We demonstrate strengths and weaknesses of our systems by studying how various restrictions on the behaviors of quantum-automaton verifiers affect the power of quantum interactive proof systems. ", "keywords": "quantum finite automaton;quantum interactive proof system;quantum measurement;quantum circuit", "title": "An application of quantum finite automata to interactive proof systems"} {"abstract": "Simulations are currently an essential tool to develop and test wireless sensor networks (WSNs) protocols and to analyze future WSNs applications performance. Researchers often simulate their proposals rather than deploying high-cost test-beds or developing complex mathematical analysis. However, simulation results rely on physical layer assumptions, which are not usually accurate enough to capture the real behavior of a WSN. Such an issue can lead to mistaken or questionable results. Besides, most of the envisioned applications for WSNs consider the nodes to be at the ground level. However, there is a lack of radio propagation characterization and validation by measurements with nodes at ground level for actual sensor hardware. In this paper, we propose to use a low-computational-cost, two-slope log-normal path-loss near-ground outdoor channel model at 868 MHz in WSN simulations. The model is validated by extensive real hardware measurements obtained in different scenarios. In addition, accurate model parameters are provided. This model is compared with the well-known one-slope path-loss model. We demonstrate that the two-slope log-normal model provides more accurate WSN simulations at almost the same computational cost as the single-slope one. It is also shown that the radio propagation characterization heavily depends on the adjusted model parameters for a target deployment scenario: The model parameters have a considerable impact on the average number of neighbors and on the network connectivity.", "keywords": "channel modeling;near ground propagation;simulation", "title": "An accurate radio channel model for wireless sensor networks simulation"} {"abstract": "In knowledge discovery in a text database, extracting and returning a subset of information highly relevant to a user's query is a critical task. In a broader sense, this is essentially identification of certain personalized patterns that drives such applications as Web search engine construction, customized text summarization and automated question answering. A related problem of text snippet extraction has been previously studied in information retrieval.
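A minimal sketch of a two-slope log-normal path-loss model of the kind described in the WSN channel abstract above; the exponents, breakpoint distance, and shadowing deviation below are illustrative values, not the paper's fitted 868 MHz parameters.

```python
import numpy as np

def path_loss_db(d, d0=1.0, pl0=40.0, n1=2.0, n2=3.5, d_break=10.0,
                 sigma=3.0, rng=None):
    """Two-slope log-normal path loss (illustrative parameters):
    exponent n1 up to the breakpoint distance, n2 beyond it, plus
    zero-mean Gaussian shadowing with standard deviation sigma (dB)."""
    rng = rng or np.random.default_rng(0)
    d = np.asarray(d, dtype=float)
    pl = np.where(
        d <= d_break,
        pl0 + 10 * n1 * np.log10(d / d0),
        pl0 + 10 * n1 * np.log10(d_break / d0)
            + 10 * n2 * np.log10(d / d_break))
    return pl + rng.normal(0, sigma, d.shape)

d = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
print(path_loss_db(d).round(1))
# In a simulator, a node pair is connected when
# tx_power_dbm - path_loss_db >= receiver_sensitivity_dbm.
```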
In these studies, common strategies for extracting and presenting text snippets to meet user needs either process document fragments that have been delimited a priori or use a sliding window of a fixed size to highlight the results. In this work, we argue that text snippet extraction can be generalized if the user's intention is better utilized. Our approach overcomes the rigidity of existing approaches by dynamically returning more flexible start/end positions of text snippets, which are also semantically more coherent. This is achieved by constructing and using statistical language models which effectively capture the commonalities between a document and the user intention. Experiments indicate that our proposed solutions provide effective personalized information extraction services.", "keywords": "text snippet extraction;personalization;language model;information retrieval;natural language processing;pattern discovery;hidden markov model", "title": "Personalized text snippet extraction using statistical language models"} {"abstract": "We find, in polynomial time, a schedule for a complete binary tree directed acyclic graph (dag) with n unit execution time tasks on a linear array whose makespan is optimal within a factor of 1 + o(1). Further, given a binary tree dag T with n tasks and height h, we find, in polynomial time, a schedule for T on a linear array whose makespan is optimal within a factor of 5 + o(1). On the other hand, we prove that explicit lower and upper bounds on the makespan of optimal schedules of binary tree dags on linear arrays differ at least by a factor of 1 + √2/2. We also find, in polynomial time, schedules for bounded tree dags with n unit execution time tasks, degree d, and height h ∈ o(n^(1/2)) ∪ ω(n^(1/2)) on a linear array which are optimal within a factor of 1 + o(1), this time under the assumption of links with unlimited bandwidth. Finally, we compute an improved upper bound on the makespan of an optimal schedule for a tree dag on the architecture-independent model of Papadimitriou and Yannakakis [14], provided that its height is not too large.", "keywords": "multiprocessing;parallel computation;parallel architectures;communication delay;scheduling;tree dags;linear array;mesh array;tree decomposition", "title": "Upper and lower bounds on the makespan of schedules for tree dags on linear arrays"} {"abstract": "In many applications, a face recognition model learned on a source domain but applied to a novel target domain degenerates, sometimes significantly, due to the mismatch between the two domains. Aiming at learning a better face recognition model for the target domain, this paper proposes a simple but effective domain adaptation approach that transfers the supervision knowledge from a labeled source domain to the unlabeled target domain. Our basic idea is to convert the source domain images to the target domain (termed targetizing the source domain hereinafter), and at the same time keep their supervision information. For this purpose, each source domain image is simply represented as a linear combination of sparse target domain neighbors in the image space, with the combination coefficients, however, learnt in a common subspace.
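A rough sketch of this reconstruction step is given below. The nearest-neighbor selection and the image-space least-squares fit are our simplifications for illustration (the paper learns the coefficients in a common subspace), and all names are hypothetical:

```python
import numpy as np

def targetize(src_img, tgt_imgs, k=5):
    """Toy sketch of 'targetizing' one source image: pick its k nearest
    target-domain images and reconstruct it as their linear combination.
    For illustration the coefficients are fit by least squares in image
    space rather than in a learned common subspace."""
    d = np.linalg.norm(tgt_imgs - src_img, axis=1)  # distances to targets
    nn = np.argsort(d)[:k]                          # sparse neighbor set
    B = tgt_imgs[nn].T                              # basis of neighbors
    coef, *_ = np.linalg.lstsq(B, src_img, rcond=None)
    return B @ coef                                 # targetized image

rng = np.random.default_rng(0)
tgt = rng.normal(size=(100, 64))   # 100 target-domain images, 64-dim each
src = rng.normal(size=64)          # one source-domain image
print(targetize(src, tgt).shape)   # (64,)
```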
The principle behind this strategy is that the common knowledge is only favorable for accurate cross-domain reconstruction, but for the classification in the target domain, the specific knowledge of the target domain is also essential and thus should be mostly preserved (through targetization in the image space in this work). To discover the common knowledge, specifically, a common subspace is learnt, in which the structures of both domains are preserved and meanwhile the disparity of the source and target domains is reduced. The proposed method is extensively evaluated under three face recognition scenarios, i.e., domain adaptation across view angle, domain adaptation across ethnicity and domain adaptation across imaging condition. The experimental results illustrate the superiority of our method over several competitive methods.", "keywords": "face recognition;domain adaptation;common subspace learning;targetize the source domain", "title": "Domain Adaptation for Face Recognition: Targetize Source Domain Bridged by Common Subspace"} {"abstract": "Electronic health records are increasingly used to enhance availability, recovery, and transfer of health records. Newly developed online health systems such as Google Health create new security and privacy risks. In this paper, we elucidate a clear threat model for online health information systems. We distinguish between privacy and security threats. In response to these risks, we propose a traitor-tracing solution, which embeds proof to trace an attacker who leaks data from a repository. We argue that the application of traitor-tracing techniques to online health systems can align incentives and decrease risks.", "keywords": "traitor-tracing schemes;information health systems;privacy;legal aspects", "title": "threat analysis of online health information system"} {"abstract": "Embryological development provides an inspiring example of the creation of complex hierarchical structures by self-organization. Likewise, biological metamorphosis shows how these complex systems can radically restructure themselves. Our research investigates these principles and their application to artificial systems in order to create intricately structured systems that are ordered from the nanoscale up to the macroscale. However these processes depend on mutually interdependent unfoldings of an information process and of the \"body\" in which it is occurring. Such embodied computation provides challenges as well as opportunities, and in order to fulfill its promise, we need both formal and informal models for conceptualizing, designing, and reasoning about embodied computation. This paper presents a preliminary design for one such model especially oriented toward artificial morphogenesis.", "keywords": "algorithmic assembly;embodied computation;embodiment;embryological development;metamorphosis;morphogenesis;nanotechnology;post-moore's law computing;reconfigurable systems;self-assembly;self-organization", "title": "Models and Mechanisms for Artificial Morphogenesis"} {"abstract": "This paper investigates the use of Euclidean invariant features in a generalization of iterative closest point registration of range images. Pointwise correspondences are chosen as the closest point with respect to a weighted linear combination of positional and feature distances. It is shown that under ideal noise-free conditions, correspondences formed using this distance function are correct more often than correspondences formed using the positional distance alone.
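The correspondence rule can be sketched as follows; this is our minimal illustration, where `alpha`, the array layout, and the squared-distance form are assumptions rather than the paper's notation:

```python
import numpy as np

def closest_point(p_pos, p_feat, scene_pos, scene_feat, alpha=0.5):
    """Index of the scene point minimizing a weighted combination of
    squared positional and squared feature distance; alpha sets the
    relative contribution of the invariant features."""
    d2 = (np.linalg.norm(scene_pos - p_pos, axis=1) ** 2
          + alpha ** 2 * np.linalg.norm(scene_feat - p_feat, axis=1) ** 2)
    return int(np.argmin(d2))

rng = np.random.default_rng(0)
scene_pos = rng.normal(size=(50, 3))    # 3-D points of the scene
scene_feat = rng.normal(size=(50, 2))   # invariant features per point
print(closest_point(scene_pos[7], scene_feat[7], scene_pos, scene_feat))  # 7
```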
In addition, monotonic convergence to at least a local minimum is shown to hold for this method. When noise is present, a method that automatically sets the optimal relative contribution of features and positions is described. This method trades off error in feature values due to noise against error in positions due to misalignment. Experimental results suggest that using invariant features decreases the probability of being trapped in a local minimum and may be an effective solution for difficult range image registration problems where the scene is very small compared to the model.", "keywords": "registration;range images;feature detection;invariance", "title": "ICP registration using invariant features"} {"abstract": "With the use of individual-level travel survey datasets describing the detailed activities of households, it is possible to analyze human movements with a high degree of precision. However, travel survey data are not without quality issues. Potential exists for origins and destinations of reported trips not to be geo-referenced, perhaps due to misreported information or inconsistencies in spatial address databases, which can limit the usefulness of the survey data. From an analytical standpoint, this is a serious problem because a single unreferenced stop in a trip record in effect renders that individual's data useless, especially in cases where analyzing chains of activity locations is of interest. This paper presents a framework and basic computational approach for exploring unlocatable activity locations inherent to travel surveys. Derived from recent work in developing a network-based, probabilistic time geography, the proposed methods are able to estimate the likely locations of missing trip origins and destinations. The methods generate probabilistic potential path trees which are used to visualize and quantify potential locations for the unreferenced destinations. The methods are demonstrated with simulated survey data from a smaller metropolitan area.", "keywords": "time geography;travel surveys;spatial behavior;geocoding;visualization;probability;error;networks", "title": "Where were you? Development of a time-geographic approach for activity destination re-construction"} {"abstract": "We present new primal-dual algorithms for several network design problems. The problems considered are the generalized Steiner tree problem (GST), the directed Steiner tree problem (DST), and the set cover problem (SC) which is a subcase of DST. All our problems are NP-hard; so we are interested in their approximation algorithms. First, we give an algorithm for DST which is based on the traditional approach of designing primal-dual approximation algorithms. We show that the approximation factor of the algorithm is k, where k is the number of terminals, in the case when the problem is restricted to quasi-bipartite graphs. We also give pathologically bad examples for the algorithm's performance. To overcome the problems exposed by the bad examples, we design a new framework for primal-dual algorithms which can be applied to all of our problems. The main feature of the new approach is that, unlike the traditional primal-dual algorithms, it keeps the dual solution in the interior of the dual feasible region. The new approach allows us to avoid including too many arcs in the solution, and thus achieves a smaller-cost solution. Our computational results show that the interior-point version of the primal-dual method performs better than the original primal-dual method most of the time.
", "keywords": "steiner tree;integer programming;approximation algorithm;primal-dual algorithm", "title": "New primal-dual algorithms for Steiner tree problems"} {"abstract": "In this study, an expert trajectory was proposed for control of nuclear research reactors. The trajectory being followed by the reactor power is composed of three parts. In order to calculate periods at the midpoint of each part of the trajectory, a period generator was designed based on artificial neural networks. The contribution of the expert trajectory to the reactor control system was investigated. Furthermore, the behavior of the controller with the expert trajectory was tested for various initial and desired power levels, as well as under disturbance. It was seen that the controller could control the system successfully under all conditions within the acceptable error tolerance.", "keywords": "nuclear reactor control;neural networks;trajectory planning", "title": "An expert trajectory design for control of nuclear research reactors"} {"abstract": "Facebook and Renren use are positively associated with bridging social capital. Facebooks relationship with bridging social capital is stronger than Renren. Renren use is positively associated with maintaining home country social capital.", "keywords": "facebook;renren;social networking sites;social capital;chinese international students", "title": "Facebook or Renren? A comparative study of social networking site use and social capital among Chinese international students in the United States"} {"abstract": "Identifying outstanding phishing features that best fit the Iranian bank websites. Extracting a reduct of influential indicators in phishing detection for Iranian e-banking system using rough sets theory. Determining critical phishing detection rules and forming a flexible rule base for phishing detection. Building a fuzzyrough hybrid system as a core processing unit of phishing detection applications or web browser add-ons and extensions concentrated on Iranian e-banking. Applying the proposed system on Iranian phishing sites and achieving an efficiency of 88%.", "keywords": "e-banking;phishing;fraud detection;fuzzy expert system;rough sets theory", "title": "Detection of phishing attacks in Iranian e-banking using a fuzzyrough hybrid system"} {"abstract": "We present an alternative video-making framework for children with tools that integrate video capture with movie production. We propose different forms of interaction with physical artifacts to capture storytelling. Play interactions as input to video editing systems assuage the interface complexities of film construction in commercial software. We aim to motivate young users in telling their stories, extracting meaning from their experiences by capturing supporting video to accompany their stories, and driving reflection on the outcomes of their movies. We report on our design process over the course of four research projects that span from a graphical user interface to a physical instantiation of video. We interface the digital and physical realms using tangible metaphors for digital data, providing a spontaneous and collaborative approach to video composition. We evaluate our systems during observations with 4- to 14-year-old users and analyze their different approaches to capturing, collecting, editing, and performing visual and sound clips.", "keywords": "children;interaction design;storytelling;tangible user interfaces;video", "title": "Play-it-by-eye! 
Collect movies and improvise perspectives with tangible video objects"} {"abstract": "Using the 2011 Brisbane flood as a case study. Respondents' perceptions of the importance of travel/traffic information were modelled. The hysteresis phenomenon in respondents' perceived information importance. Socio-demographic features have a significant impact on such perceptions. No evidence of the influence of travel/traffic information on respondents' travel mode.", "keywords": "travel information;traffic information;travel behaviour;adverse weather;natural disaster;random-effects ordered logit", "title": "Exploring association between perceived importance of travel/traffic information and travel behaviour in natural disasters: A case study of the 2011 Brisbane floods"} {"abstract": "We propose a computational model which computes the importance of 2-D object shape parts, and we apply it to detect and localize objects with and without occlusions. The importance of a shape part (a localized contour fragment) is considered from the perspective of its contribution to the perception and recognition of the global shape of the object. Accordingly, the part importance measure is defined based on the ability to estimate/recall the global shapes of objects from the local part, namely the part's shape reconstructability. More precisely, the shape reconstructability of a part is determined by two factors: part variation and part uniqueness. (i) Part variation measures the precision of the global shape reconstruction, i.e. the consistency of the reconstructed global shape with the true object shape; and (ii) part uniqueness quantifies the ambiguity of matching the part to the object, i.e. taking into account that the part could be matched to the object at several different locations. Taking both these factors into consideration, an information theoretic formulation is proposed to measure part importance by the conditional entropy of the reconstruction of the object shape from the part. Experimental results demonstrate the benefit of the proposed part importance in object detection, including the improvement of detection rate, localization accuracy, and detection efficiency. By comparing with other state-of-the-art object detectors in a challenging but common scenario, object detection with occlusions, we show a considerable improvement using the proposed importance measure, with the detection rate increased by over 10%. On a subset of the challenging PASCAL dataset, the Interpolated Average Precision (as used in the PASCAL VOC challenge) is improved by 48%. Moreover, we perform a psychological experiment which provides evidence suggesting that humans use a similar measure for part importance when perceiving and recognizing shapes.", "keywords": "shape part;part importance;shape reconstruction;object recognition and detection", "title": "A Shape Reconstructability Measure of Object Part Importance with Applications to Object Detection and Localization"} {"abstract": "We present a novel technological approach for the in situ realization of a micron-thin poly-acrylamide membrane in the center of a microfluidic channel. The membrane is formed by interfacial polymerization of an inner stream of monomer solution in between two streams of initiator/catalyst solution in a hydrodynamic focusing chip. 20 µm thick SU-8 structures are used to replicate the chip in polydimethylsiloxane (PDMS). The chip is fitted within an adaptor allowing easy fluidic connections, temperature control and optical monitoring under a microscope.
With this system, we can easily tune the internal stream width from 5 to 100 µm by varying the internal/external flow ratio.", "keywords": "microfluidics;hydrodynamic focusing;poly-acrylamide membrane;interfacial polymerization", "title": "In situ fabrication of a poly-acrylamide membrane in a microfluidic channel"} {"abstract": "A new multi-agent algorithm inspired by a collision between two objects in one dimension is presented. An enhanced colliding bodies optimization which uses memory to save some of the best solutions is developed. A mechanism is utilized to escape from local optima. Performance of the proposed algorithm is compared to that of standard CBO and some other optimization techniques.", "keywords": "colliding bodies optimization;coefficient of restitution;enhanced colliding bodies optimization;colliding memory;discrete and continuous optimization;optimum design of truss structures", "title": "Enhanced colliding bodies optimization for design problems with continuous and discrete variables"} {"abstract": "Understanding goals and preferences behind a user's online activities can greatly help information providers, such as search engines and e-commerce web sites, to personalize contents and thus improve user satisfaction. Understanding a user's intention could also provide other business advantages to information providers. For example, information providers can decide whether to display commercial content based on a user's intent to purchase. Previous work on Web search defines three major types of user search goals for search queries: navigational, informational and transactional or resource [1][7]. In this paper, we focus our attention on capturing commercial intention from search queries and Web pages, i.e., when a user submits a query or browses a Web page, whether he/she is about to commit or in the middle of a commercial activity, such as purchase, auction, selling, paid service, etc. We call the commercial intention behind a user's online activities OCI (Online Commercial Intention). We also propose the notion of \"Commercial Activity Phase\" (CAP), which identifies in which phase of his/her commercial activities a user is: Research or Commit. We present a framework for building machine learning models to learn OCI based on any Web page content. Based on that framework, we build models to detect OCI from search queries and Web pages. We train machine learning models from two types of data sources for a given search query: content of algorithmic search result page(s) and contents of top sites returned by a search engine. Our experiments show that the model based on the first data source achieved better performance. We also discover that frequent queries are more likely to have commercial intention. Finally, we propose our future work in learning richer commercial intention behind users' online activities.", "keywords": "help;framework;understandability;activation;search intention;paging;online commercial intention;oci;user satisfaction;examples;goals;personality;performance;experience;browse;model;paper;informal;business;user;online;search;preference;training;attention;svm;web search;contention;research;users;search engine;web page;types;auction;intention;data;web site;queries;machine learning;commit;learning;future;transaction;resource;query", "title": "detecting online commercial intention (oci)"} {"abstract": "In this paper, we articulate the role of movement within a perceptual-motor view of tangible interaction.
We argue that the history of human-product interaction design has exhibited an increasing neglect of the intrinsic importance of movement. On one hand, human-product interaction design has shown little appreciation in practice of the centrality of our bodily engagement in the world. This has resulted in technologies that continue to place demands on our cognitive abilities, and deny us the opportunity of building bodily skill. On the other hand, the potential for movement in products to be a meaningful component of our interaction with them has also been ignored. Both of these directions (design for bodily engagement and the expressiveness of product movements) are sketched out, with particular attention to their potential to impact both interaction aesthetics and usability. We illustrate a number of these ideas with examples.", "keywords": "movement;tangible interaction;aesthetics;motor skill;expression;robotics", "title": "Easy doesn't do it: skill and expression in tangible aesthetics"} {"abstract": "This paper shows that the worst case switching pattern that incurs the longest bus delay while considering the RLC effect is quite different from that while considering the RC effect alone. It implies that the existing encoding schemes based on the RC model may not improve, or may possibly worsen, the delay when the inductance effects become dominant. A bus-invert method is also proposed to reduce the on-chip bus delay based on the RLC model. Simulation results show that the proposed encoding scheme significantly reduces the worst case coupling delay of the inductance-dominated buses.", "keywords": "bus-invert method;coupling;inductance;interconnect delay;worst case switching pattern", "title": "RLC coupling-aware simulation and on-chip bus encoding for delay reduction"} {"abstract": "In information systems, most research on knowledge management assumes that knowledge has positive implications for organizations. However, knowledge is a double-edged sword: while too little might result in expensive mistakes, too much might result in unwanted accountability. The purpose of this paper is to highlight the lack of attention paid to the unintended consequences of managing organizational knowledge and thereby to broaden the scope of IS-based knowledge management research. To this end, this paper analyzes the IS literature on knowledge management. Using a framework developed by Deetz (1996), research articles published between 1990 and 2000 in six IS journals are classified into one of four scientific discourses. These discourses are the normative, the interpretive, the critical, and the dialogic. For each of these discourses, we identify the research focus, the metaphors of knowledge, the theoretical foundations, and the implications apparent in the articles representing it. The metaphors of knowledge that emerge from this analysis are knowledge as object, asset, mind, commodity, and discipline. Furthermore, we present a paper that is exemplary of each discourse. Our objective with this analysis is to raise IS researchers' awareness of the potential and the implications of the different discourses in the study of knowledge and knowledge management.", "keywords": "epistemology;knowledge;knowledge management", "title": "Studying knowledge management in information systems research: Discourses and theoretical assumptions"} {"abstract": "Introducing the Partial Information Network Query (PINQ) problem. Developing a parameterized algorithm for PINQ.
For topology-free network queries: improving upon previous running times. For two types of alignment network queries: improving upon previous running times.", "keywords": "partial information network query;alignment network query;topology-free network query", "title": "Partial Information Network Queries"} {"abstract": "Banks currently have a great interest in internal audits to reduce risks, to prevent themselves from insolvency, and to take quick action for financial incidents. This study presents an integrated audit approach of rule-based and case-based reasoning, which includes two stages of reasoning, i.e., a screening stage based on rule-based reasoning and an auditing stage based on case-based reasoning. Rule-based reasoning uses induction rules to determine whether a new problem should be inspected further or not. Case-based reasoning performs similarity-based matching to find the case in the case base most similar to the new problem. The method presented is applied to internal audit data of a bank.", "keywords": "bank audit;rule base;case base;reasoning;similarity", "title": "Rule-based and case-based reasoning approach for internal audit of bank"} {"abstract": "Many software model checkers only detect counterexamples with deep loops after exploring numerous spurious and increasingly longer counterexamples. We propose a technique that aims at eliminating this weakness by constructing auxiliary paths that represent the effect of a range of loop iterations. Unlike acceleration, which captures the exact effect of arbitrarily many loop iterations, these auxiliary paths may under-approximate the behaviour of the loops. In return, the approximation is sound with respect to the bit-vector semantics of programs. Our approach supports arbitrary conditions and assignments to arrays in the loop body, but may as a result introduce quantified conditionals. To reduce the resulting performance penalty, we present two quantifier elimination techniques specially geared towards our application. Loop under-approximation can be combined with a broad range of verification techniques. We paired our techniques with lazy abstraction and bounded model checking, and evaluated the resulting tool on a number of buffer overflow benchmarks, demonstrating its ability to efficiently detect deep counterexamples in C programs that manipulate arrays.", "keywords": "model checking;loop acceleration;underapproximation;counterexamples", "title": "Under-approximating loops in C programs for fast counterexample detection"} {"abstract": "This communication addresses the problem of robust target localization in distributed multiple-input multiple-output (MIMO) radar using range measurements that may contain outliers. To achieve robustness against outliers, we construct an objective function for MIMO target localization via the maximum correntropy criterion. To deal with such a nonconvex and nonlinear function, we apply a half-quadratic optimization technique to determine the target position and auxiliary variables alternately. Especially, we derive a semidefinite relaxation formulation for the aforementioned position determination step.
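As a rough illustration of the maximum correntropy idea, the sketch below uses a simplified time-of-arrival setting with invented data; it is not the paper's bistatic MIMO geometry or its semidefinite relaxation:

```python
import numpy as np

def mcc_localize(anchors, ranges, sigma=2.0, iters=30):
    """Half-quadratic iteration for robust range-based localization under
    the maximum correntropy criterion: alternate Gaussian-kernel weights
    and a weighted Gauss-Newton step, so grossly wrong (outlier) ranges
    get exponentially small influence on the position estimate."""
    x = anchors.mean(axis=0)                      # initial position guess
    for _ in range(iters):
        diff = x - anchors
        dist = np.linalg.norm(diff, axis=1) + 1e-12
        r = ranges - dist                         # range residuals
        w = np.exp(-r ** 2 / (2 * sigma ** 2))    # correntropy weights
        J = -diff / dist[:, None]                 # Jacobian of residuals
        A = J.T * w                               # J^T W
        x = x + np.linalg.solve(A @ J, -(A @ r))  # weighted Gauss-Newton
    return x

anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [2., 2.]])
truth = np.array([3., 4.])
ranges = np.linalg.norm(anchors - truth, axis=1)
ranges[0] += 8.0                                  # one gross outlier range
print(mcc_localize(anchors, ranges))              # close to [3. 4.]
```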
The robust performance of the developed approach is demonstrated by comparison with several conventional localization methods via computer simulation.", "keywords": "target localization;multiple-input multiple-output radar;nonlinear optimization;nonconvex optimization;semidefinite relaxation;maximum correntropy criterion;non-line-of-sight", "title": "Robust MIMO radar target localization via nonconvex optimization"} {"abstract": "Few invasion biologists consider the long-term evolutionary context of an invading organism and its invaded ecosystem. Here, I consider patterns of plant invasions across Eastern North America, Europe, and East/Far East Asia, and explore whether biases in exchanges of plants from each region reflect major selection pressures present within each region since the late Miocene, during which temperate Northern Hemisphere floras diverged taxonomically and ecologically. Although there are many exceptions, the European flora appears enriched in species well adapted to frequent, intense disturbances such as cultivation and grazing; the North American composite (Asteraceae) flora appears particularly well adapted to nutrient-rich meadows and forest openings; and the East Asian flora is enriched in shade-tolerant trees, shrubs, and vines of high forest-invasive potential. I argue that such directionality in invasions across different habitat types supports the notion that some species are preadapted to become invasive as a result of differences in historical selection pressures between regions.", "keywords": "preadaptation;eastern north america;naturalized plants;invasion biology", "title": "Plant invasions across the Northern Hemisphere: a deep-time perspective"} {"abstract": "Many classification tasks are not entirely suitable for supervised learning. Instead of individual feature vectors, bags of feature vectors can be considered. Many learning scenarios with bags in training and/or test phase have been proposed. We provide an overview and taxonomy of these learning scenarios.", "keywords": "multiple instance learning;set classification;group-based classification;label dependencies;weakly labeled data", "title": "On classification with bags, groups and sets"} {"abstract": "Support for Computer Supported Collaborative Blended Learning scripts is proposed. Requirements are: replicability, adaptability, flexibility, scalability. The system integrates several technologies and derives a more general architecture. An experiment, based on a previous script, was conducted to validate the proposal. The findings show that the script reduces the management workload.", "keywords": "cscl scripts;orchestration;ims learning design;service integration;case study", "title": "Technological support for the enactment of collaborative scripted learning activities across multiple spatial locations"} {"abstract": "New methods are presented for parallel simulation of discrete event systems that, when applicable, can usefully employ a number of processors much larger than the number of objects in the system being simulated. Abandoning the distributed event list approach, the simulation problem is posed using recurrence relations. We bring three algorithmic ideas to bear on parallel simulation: parallel prefix computation, parallel merging, and iterative folding.
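For example, the departure times of a FCFS G/G/1 queue obey a recurrence that is an associative max-plus operation, so it can be evaluated with a parallel prefix scan; the sketch below (our notation, written sequentially for clarity, not taken from the paper) shows the recurrence itself:

```python
import numpy as np

def gg1_departures(arrivals, services):
    """Lindley-style recurrence for a FCFS G/G/1 queue:
        D[n] = max(A[n], D[n-1]) + S[n].
    Written sequentially here; because the update is associative in the
    max-plus algebra, it can be evaluated by a parallel prefix scan in
    time proportional to N/P + log P on P processors, which matches the
    bound claimed in the abstract."""
    d = 0.0
    out = np.empty(len(arrivals))
    for n, (a, s) in enumerate(zip(arrivals, services)):
        d = max(a, d) + s
        out[n] = d
    return out

rng = np.random.default_rng(1)
arr = np.cumsum(rng.exponential(1.0, 10))   # arrival times of 10 jobs
svc = rng.exponential(0.8, 10)              # service times
print(gg1_departures(arr, svc))             # departure times
```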
Efficient parallel simulations are given for (in turn) the G/G/1 queue, a variety of queueing networks having a global first come first served structure (e.g., a series of queues with finite buffers), acyclic networks of queues, and networks of queues with feedback and cycles. In particular, the problem of simulating the arrival and departure times for the first N jobs to a single G/G/1 queue is solved in time proportional to N/P + log P using P processors.", "keywords": "network;processor;queueing networks;structure;method;simulation;systems;efficiency;parallel simulation;log;event;parallel;computation;object;timing;feedback;queue;distributed;buffers;global", "title": "unboundedly parallel simulations via recurrence relations"} {"abstract": "Assembly, one of the oldest forms of industrial production, and its twin area, disassembly, have enjoyed tremendous modernization in the era of the information revolution. New enabling technologies, including prominent examples such as virtual CAD, Design for Assembly and Disassembly (DFAD), robotic and intelligent assembly, and Flexible Assembly (FA), are now becoming commonplace. This article reviews some of the newer solutions, and an extended framework for Cooperation Requirement Planning (ECRP) in robotic assembly/disassembly is developed. Recent research under the PRISM program at Purdue University to enable ECRP, along with other relevant projects, is presented. The challenges to researchers in this field, in adapting these solutions to the emerging environment of global and local supply networks, are also discussed.", "keywords": "error recovery;conflict resolution;design for assembly;artificial intelligence;design for disassembly", "title": "Assembly and disassembly: An overview and framework for cooperation requirement planning with conflict resolution"} {"abstract": "A robust MILP approach is proposed for self-scheduling of a hybrid CSP-fossil fuel plant. Uncertainty is introduced in the model by asymmetric prediction intervals. The robustness cost is controlled by the budget of uncertainty. Plant self-scheduling and bidding strategies in a day-ahead market are simultaneously considered.", "keywords": "asymmetric uncertainty;backup system;bidding strategies;hybrid csp plant;robust optimisation;self-scheduling", "title": "Robust optimisation for self-scheduling and bidding strategies of hybrid CSP-fossil power plants"} {"abstract": "With the tremendous advances in hand-held computing and communication capabilities, rapid proliferation of mobile devices, and decreasing device costs, we are seeing a growth in mobile e-business in various consumer and business markets. In this paper, we present a novel architecture and framework for end-to-end mobile e-business applications such as purchasing, retail point of sale, and order management. The design takes into consideration disconnection, application context and failure modes to provide mobile users with seamless and transparent access to commerce and content activities. In our architecture, we consider a novel business process design based on state machines and event management to handle disconnection and resource limitations. We designed, implemented and deployed a system for mobile e-business on clients integrated with private exchanges and sell-side servers. The e-business framework on mobile clients is implemented based on J2ME and open XML standards. A performance study of simple e-business transactions was done on the client using the above mechanisms and programming environment.
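A toy sketch of the state-machine view of such a disconnection-tolerant process is shown below; the state and event names are invented for illustration and are not taken from the paper:

```python
# Hypothetical state machine for a purchase process that tolerates
# disconnection: offline states hold local context until reconnection.
TRANSITIONS = {
    ("browsing", "add_item"): "cart",
    ("cart", "checkout"): "awaiting_confirm",
    ("cart", "disconnect"): "cart_offline",          # keep local state
    ("cart_offline", "reconnect"): "cart",           # resume where left off
    ("awaiting_confirm", "confirm"): "ordered",
    ("awaiting_confirm", "disconnect"): "pending_sync",
    ("pending_sync", "reconnect"): "awaiting_confirm",
}

def step(state, event):
    """Advance the process; unknown events leave the state unchanged, so
    a disconnected client can queue them and retry after reconnection."""
    return TRANSITIONS.get((state, event), state)

s = "browsing"
for ev in ["add_item", "checkout", "disconnect", "reconnect", "confirm"]:
    s = step(s, ev)
print(s)  # -> "ordered"
```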
We show that a purchasing process implemented with the framework performs reasonably well.", "keywords": "self-managing systems;context-driven computing;mobile computing;workflow;disconnected computing;mobile e-business", "title": "self-managing, disconnected processes and mechanisms for mobile e-business"} {"abstract": "We review the method of Parallel Factor Analysis, which simultaneously fits multiple two-way arrays or 'slices' of a three-way array in terms of a common set of factors with differing relative weights in each 'slice'. Mathematically, it is a straightforward generalization of the bilinear model of factor (or component) analysis (x_ij = Σ_{r=1}^{R} a_ir b_jr) to a trilinear model (x_ijk = Σ_{r=1}^{R} a_ir b_jr c_kr). Despite this simplicity, it has an important property not possessed by the two-way model: if the latent factors show adequately distinct patterns of three-way variation, the model is fully identified; the orientation of factors is uniquely determined by minimizing residual error, eliminating the need for a separate 'rotation' phase of analysis. The model can be used several ways. It can be directly fit to a three-way array of observations with (possibly incomplete) factorial structure, or it can be indirectly fit to the original observations by fitting a set of covariance matrices computed from the observations, with each matrix corresponding to a two-way subset of the data. Even more generally, one can simultaneously analyze covariance matrices computed from different samples, perhaps corresponding to different treatment groups, different kinds of cases, data from different studies, etc. To demonstrate the method we analyze data from an experiment on right vs. left cerebral hemispheric control of the hands during various tasks. The factors found appear to correspond to the causal influences manipulated in the experiment, revealing their patterns of influence in all three ways of the data. Several generalizations of the parallel factor analysis model are currently under development, including ones that combine parallel factors with Tucker-like factor 'interactions'. Of key importance is the need to increase the method's robustness against nonstationary factor structures and qualitative (nonproportional) factor change.", "keywords": "3-way exploratory factor analysis;unique axes;parallel proportional profiles;factor rotation problem;3-way data preprocessing;3 mode principal components;trilinear decomposition;trilinear model;multidimensional scaling;longitudinal factor analysis;factor analysis of spectra;interpretation of factors;real or causal or explanatory factors;tucker,l.r.;cattel,r.b.", "title": "PARAFAC - PARALLEL FACTOR-ANALYSIS"} {"abstract": "Helper threading is a technology to accelerate a program by exploiting a processor's multithreading capability to run \"assist\" threads. Previous experiments on hyper-threaded processors have demonstrated significant speedups by using helper threads to prefetch hard-to-predict delinquent data accesses. In order to apply this technique to processors that do not have built-in hardware support for multithreading, we introduce virtual multithreading (VMT), a novel form of switch-on-event user-level multithreading, capable of fly-weight multiplexing of event-driven thread executions on a single processor without additional operating system support. The compiler plays a key role in minimizing synchronization cost by judiciously partitioning register usage among the user-level threads.
The VMT approach makes it possible to launch dynamic helper thread instances in response to long-latency cache miss events, and to run helper threads in the shadow of cache misses when the main thread would be otherwise stalled. The concept of VMT is prototyped on an Itanium® 2 processor using features provided by the Processor Abstraction Layer (PAL) firmware mechanism already present in currently shipping processors. On a 4-way MP physical system equipped with VMT-enabled Itanium 2 processors, helper threading via the VMT mechanism can achieve significant performance gains for a diverse set of real-world workloads, ranging from single-threaded workstation benchmarks to heavily multithreaded large scale decision support systems (DSS) using the IBM DB2 Universal Database. We measure a wall-clock speedup of 5.8% to 38.5% for the workstation benchmarks, and 5.0% to 12.7% on various queries in the DSS workload.", "keywords": "helper thread;cache miss prefetching;multithreading;switch-on-event;itanium processor;pal;db2 database", "title": "Helper threads via virtual multithreading on an experimental Itanium® 2 processor-based platform"} {"abstract": "The problem of emergency department (ED) overcrowding has reached crisis proportions in the last decade. In 2005, the National Academy of Engineering and the Institute of Medicine reported on the important role of simulation as a systems analysis tool that can have an impact on care processes at the care-team, organizational, and environmental levels. Simulation has been widely used to understand causes of ED overcrowding and to test interventions to alleviate its effects. In this paper, we present a systematic review of ED simulation literature from 1970 to 2006 from healthcare, systems engineering, operations research and computer science publication venues. The goals of this review are to highlight the contributions of these simulation studies to our understanding of ED overcrowding and to discuss how simulation can be better used as a tool to address this problem. We found that simulation studies provide important insights into ED overcrowding but they also have major limitations that must be addressed.", "keywords": "emergency department simulations;literature review;emergency department;overcrowding;simulation", "title": "A Systematic Review of Simulation Studies Investigating Emergency Department Overcrowding"} {"abstract": "A common industrial operation is a dual resource constrained job shop where: (a) the objective is to minimize L-max, the maximum job lateness; (b) machines are organized into groups; and ", "keywords": "job shop scheduling;dual resource constrained systems;maximum lateness;worker allocation", "title": "An effective lower bound on L-max in a worker-constrained job shop"} {"abstract": "In this paper, we address the problem of relation extraction of multiple arguments where the relation of entities is framed by multiple attributes. Such complex relations are successfully extracted using a syntactic tree-based pattern matching method. While induced subtree patterns are typically used to model the relations of multiple entities, we argue that hard pattern matching between a pattern database and instance trees cannot allow us to examine similar tree structures. Thus, we explore a tree alignment-based soft pattern matching approach to improve the coverage of induced patterns. Our pattern learning algorithm iteratively searches for the most influential dependency tree patterns as well as a control parameter for each pattern.
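The following toy sketch shows the flavor of soft pattern matching with a per-pattern threshold as the control parameter; the edge-overlap similarity and all names are our simplifications, not the paper's alignment model:

```python
# Hypothetical "soft" matching: a pattern fires when its similarity to
# the instance tree clears a learned threshold, instead of requiring an
# exact subtree match. Trees are simplified to sets of dependency edges.
def similarity(pattern, tree):
    """Fraction of the pattern's dependency edges found in the tree."""
    hits = sum(1 for edge in pattern if edge in tree)
    return hits / len(pattern)

def soft_match(pattern, threshold, tree):
    return similarity(pattern, tree) >= threshold

pattern = {("binds", "nsubj", "protein"), ("binds", "dobj", "receptor")}
tree = {("binds", "nsubj", "protein"), ("binds", "prep_to", "receptor")}
print(soft_match(pattern, 0.5, tree))   # True: 1 of 2 edges matches
```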
The resulting method outperforms two baselines, a pairwise approach with the tree-kernel support vector machine and a hard pattern matching method, on two standard datasets for a complex relation extraction task.", "keywords": "relation extraction;multiple arguments;pattern induction;local tree alignment;soft pattern matching", "title": "A local tree alignment approach to relation extraction of multiple arguments"} {"abstract": "Purpose - To confirm that the purpose of the FESD project has been to provide a framework contract for the whole public sector covering the purchase of an EDM system, technical and organisational consulting for implementation and organisational change. Design/methodology/approach - The project took the approach of working closely together with 11 partnering organisations on developing the functional requirements for the system and participating in the tender negotiations with the bidding consortia. This has proved valuable, since the project has gained a profound legitimacy for its demands and a strong basis for the roll-out in the rest of the public sector. Findings - The results of the project are manifold: for the first time in the Danish public sector a mutual framework contract has made it possible to put the same requirements forward to the bidding vendors. It has made it possible to develop mutual technical standards and to develop standardised work processes supported by the systems. Furthermore, a number of long-term findings will become evident over the next two years when the implementation projects begin to show results. Practical implications - Originally it was one of the major tasks of the FESD project to show efficiency gains and return on investment within the project's life span. This has not been possible due to the fact that the implementation projects in the partnering organisations are far from finished. Also, efficiency gains are not always part of the success criteria and it may turn out that efficiency gains weigh more in the minds of planners than in the real implementation projects. Originality/value - The article is a report from a country highly esteemed for its efforts in pushing public digital administration in order to create better service and higher efficiency.", "keywords": "document management;electronic document delivery;public sector organizations;organizational change", "title": "EDM in the Danish public sector: the FESD project"} {"abstract": "We develop a locally conservative formulation of the discontinuous Petrov-Galerkin finite element method (DPG) for convection-diffusion type problems using Lagrange multipliers to exactly enforce conservation over each element. We provide a proof of convergence as well as extensive numerical experiments showing that the method is indeed locally conservative. We also show that standard DPG, while not guaranteed to be conservative, is nearly conservative for many of the benchmarks considered. The new method preserves many of the attractive features of DPG, but turns the normally symmetric positive-definite DPG system into a saddle point problem.", "keywords": "discontinuous petrov-galerkin;local conservation;convection-diffusion;stokes flow;least squares;minimum residual;higher order;adaptive mesh refinement", "title": "Locally conservative discontinuous Petrov-Galerkin finite elements for fluid problems"} {"abstract": "Many viruses of interest, such as influenza A, have distinct segments in their genome.
The evolution of these viruses involves mutation and reassortment, where segments are interchanged between viruses that coinfect a host. Phylogenetic trees can be constructed to investigate the mutation-driven evolution of individual viral segments. However, reassortment events among viral genomes are not well depicted in such bifurcating trees. We propose the concept of reassortment networks to analyze the evolution of segmented viruses. These are layered graphs in which the layers represent evolutionary stages such as a temporal series of seasons in which influenza viruses are isolated. Nodes represent viral isolates and reassortment events between pairs of isolates. Edges represent evolutionary steps, while weights on edges represent edit costs of reassortment and mutation events. Paths represent possible transformation series among viruses. The length of each path is the total edit cost of the events required to transform one virus into another. In order to analyze τ stages of evolution of n viruses with segments of maximum length m, we first compute the pairwise distances between all corresponding segments of all viruses in O(m^2 n^2) time using dynamic programming. The reassortment network, with O(τn^2) nodes, is then constructed using these distances. The ancestors and descendants of a specific virus can be traced via shortest paths in this network, which can be found in O(τn^3) time.", "keywords": "influenza a;dynamic programming;reassortment;segmented virus;shortest paths", "title": "Reassortment Networks for Investigating the Evolution of Segmented Viruses"} {"abstract": "With the expanding availability and capability of varied technologies, classroom-based problem solving has become an increasingly attainable, yet still elusive, goal. Evidence of technology-enhanced problem-solving teaching and learning in schools has been scarce, understanding how to support students' problem solving in classroom-based, technology-enhanced learning environments has been limited, and coherent frameworks to guide implementation have been slow to emerge. Whereas researchers have examined the use and impact of scaffolds in mathematics, science, and reading, comparatively little research has focused on scaffolding learning in real-world, everyday classroom settings. Web-based systems have been developed to support problem solving, but implementations suggest variable enactment and inconsistent impact. The purpose of this article is to identify critical issues in scaffolding students' technology-enhanced problem solving in everyday classrooms. First, we examine two key constructs (problem solving and scaffolding) and propose a framework that includes essential dimensions to be considered when teachers scaffold student problem solving in technology-rich classes. We then investigate issues related to peer-, teacher-, and technology-enhanced scaffolds, and conclude by examining implications for research.", "keywords": "scaffolding;scaffolds;technology-enhanced classrooms;problem solving;scientific inquiry;technology-enhanced learning environments;technology integration", "title": "Scaffolding problem solving in technology-enhanced learning environments (TELEs): Bridging research and theory with practice"} {"abstract": "In this paper the results of a study of objective quality measures for a broad range of coding systems are presented. These objective measures take the linear and the nonlinear distortions of the coder into account.
A correlation analysis was performed in order to identify the measures that are most effective in predicting perceivable parametric attributes of speech quality. The results of this experiment, the so-called attribute-matching, yield a good composite measure for predicting the total quality for a wide range of coding systems and can be computed in pseudo-realtime. Furthermore, we describe the test signal we have used in our study, which was not natural speech but a speech-model process.", "keywords": "speech quality;objective quality-measures;attribute-matching;speech-model process", "title": "A NEW APPROACH TO OBJECTIVE QUALITY-MEASURES BASED ON ATTRIBUTE-MATCHING"} {"abstract": "The discrimination problem is of major interest in fields such as environmental management, human resources management, production management, finance, marketing, medicine, etc. For decades this problem has been studied from a multivariate statistical point of view. Recently the possibilities of new approaches have been explored, based mainly on mathematical programming. This paper follows the methodological framework of multicriteria decision aid (MCDA) to propose a new method for multigroup discrimination based on a hierarchical procedure (Multi-Group Hierarchical Discrimination, M.H.DIS). The performance of the M.H.DIS method is evaluated on eight real-world case studies from the fields of finance and marketing. A comparison is also performed with other MCDA methods.", "keywords": "discrimination;multicriteria decision aid;preference disaggregation;goal programming", "title": "Building additive utilities for Multi-Group Hierarchical Discrimination: The MHDIS method"} {"abstract": "Dynamic 3D representations enhanced students' performance. Dynamic 3D representations prompted students to allocate greater attention. Eye movements could predict students' 3D mental models of an atomic orbital. Low-spatial-ability students with dynamic 3D representations allocated more attention.", "keywords": "atomic orbital concept;mental model;spatial ability;eye-tracking", "title": "The effects of static versus dynamic 3D representations on 10th grade students' atomic orbital mental model construction: Evidence from eye movement behaviors"} {"abstract": "This paper presents a novel particle swarm optimization (PSO) based on a non-homogeneous Markov chain and differential evolution (DE) for quantification analysis of the lateral flow immunoassay (LFIA), which represents the first attempt to estimate the concentration of target analyte based on the well-established state-space model. A new switching local evolutionary PSO (SLEPSO) is developed and analyzed. The velocity updating equation jumps from one mode to another based on the non-homogeneous Markov chain, where the probability transition matrix is updated by calculating the diversity and current optimal solution. Furthermore, DE mutation and crossover operations are implemented to improve the search of local best particles in PSO. Compared with some well-known PSO algorithms, the experimental results show the superiority of the proposed SLEPSO. Finally, the new SLEPSO is successfully applied to quantification analysis of the LFIA system, which is essentially nonlinear and dynamic. This can therefore provide a new method for the quantitative interpretation of LFIA systems.
", "keywords": "lateral flow immunoassay;particle swarm optimization;differential evolution;non-homogeneous markov chain;immunochromatographic strip", "title": "A novel switching local evolutionary PSO for quantitative analysis of lateral flow immunoassay"} {"abstract": "This article develops a compositional vector-based semantics of subject and object relative pronouns within a categorical framework. Frobenius algebras are used to formalize the operations required to model the semantics of relative pronouns, including passing information between the relative clause and the modified noun phrase, as well as copying, combining, and discarding parts of the relative clause. We develop two instantiations of the abstract semantics, one based on a truth-theoretic approach and one based on corpus statistics.", "keywords": "computational linguistics;type logical grammars;distributional vector space semantics;compact closed categories;string diagrams pregroups", "title": "The Frobenius anatomy of word meanings I: subject and object relative pronouns"} {"abstract": "Based on the results of Xin (Commun. Pure Appl. Math. 51(3):229240, 1998), Zhang and Tan (Acta Math. Sin. Engl. Ser. 28(3):645652, 2012), we show the blow-up phenomena of smooth solutions to the non-isothermal compressible NavierStokesKorteweg equations in arbitrary dimensions, under the assumption that the initial density has compact support. Here the coefficients are generalized to a more general case which depends on density and temperature. Our work extends the previous corresponding results.", "keywords": "blow-up;compressible navierstokeskorteweg equations;", "title": "Blow-up of Compressible NavierStokesKorteweg Equations"} {"abstract": "Motivated by a problem faced by a multimedia entertainment retailer, we explore the problem of planning the design of a distributed database system. The problem consists of planning the design/expansion of the distributed database system by introducing new database servers and retiring possibly some existing ones in order to reduce telecommunication costs for processing user queries and server acquisition, operations and maintenance cost in a multiperiod environment where user processing demand varies over time. We develop a mathematical programming model and an effective solution approach to determine the best decisions regarding acquisition and retirement of database servers and assignment of user processing demand to the servers over time. Through a computational study, we investigate the impact of important parameters such as length of the planning horizon and demand growth on the solution quality and utilization of server capacity and examine the effectiveness of the solution approach in comparison with the commercial package LINDO. We also discuss some extensions to the problem as directions for future research.", "keywords": "planning;distributed database system;database servers;mathematical programming;heuristic", "title": "A coordinated planning model for the design of a distributed database system"} {"abstract": "In this paper, we discuss how a regression model, with a non-continuous response variable, which allows for dependency between observations, should be estimated when observations are clustered and measurements on the subjects are repeated. The cluster sizes are assumed to be large. 
We find that the conventional estimation technique suggested by the literature on generalized linear mixed models (GLMM) is slow and sometimes fails due to non-convergence and lack of memory on standard PCs. We suggest estimating the random effects as fixed effects with a generalized linear model and deriving the covariance matrix from these estimates. A simulation study shows that our proposal is feasible in terms of mean-square error and computation time. We recommend that our proposal be implemented in the software of GLMM techniques so that the estimation procedure can switch between the conventional technique and our proposal, depending on the size of the clusters.", "keywords": "monte carlo simulations;large sample;interdependence;cluster errors", "title": "Computationally feasible estimation of the covariance structure in generalized linear mixed models"} {"abstract": "Background: Patients with insulin-dependent diabetes require frequent advice if their metabolic control is not optimal. This study focuses on the fiscal and administrative aspects of telemanagement, which was used to establish a supervised autonomy of patients on intensified insulin therapy. Methods: A prospective, randomised trial with 43 patients on intensified insulin therapy was conducted. Travelling distance to the diabetes centre was 50 min one way; all patients had undergone a diabetes education course with lessons in dose adaptation. Patients were randomly assigned to telecare (n = 27) or conventional care (n = 16). They used BG-meters with a storage capacity of 120 values (Precision QID™, Abbott/Medisense) and transmitted their data over a combined modem/interface via telephone line to the diabetes centre. Data were displayed and stored by customised software (Precision Link Plus™, Abbott/Medisense). Advice for proper dose adjustment was given by telephone. Results: Average time needed for instruction in the telemedical system was 15 min. Data were transmitted every 1-3 weeks and a teleconsultation was performed by phone every 2-4 weeks, depending on the extent of specific problems. On average, personal visits in the control group were performed once a month. The physician's time expenditure for telemanagement, compared to conventional advice, was moderately higher (50 vs. 42 min per month). A substantial amount of time on the patients' side could be saved by replacing personal consultations with telephone contacts and data transmission (96 vs. 163 min/month including data transmission time). Setting up an optimal telemanagement scenario, a cost analysis was carried out yielding savings of 650 EURO per year per patient. HbA1c dropped significantly from 8.2 to 7.0% after 8 months of observation, but there was no significant difference between the intervention and control groups. Major technical problems with the telematic system did not occur during the study. Conclusions: Telemanagement of insulin-requiring diabetic patients is a cost- and time-saving procedure for the patients and results in metabolic control comparable to conventional outpatient management.", "keywords": "telemedicine;diabetes;telecare;insulin therapy;glucose monitoring", "title": "Are there time and cost savings by using telemanagement for patients on intensified insulin therapy? A randomised, controlled trial"} {"abstract": "Massively-multiplayer online games (MMOs) are increasingly popular worldwide.
{"abstract": "Massively-multiplayer online games (MMOs) are increasingly popular worldwide. MMO gaming can result in problematic Internet use (PIU; or Internet addiction), which is characterized by dysfunction in areas such as work or relationships. Because PIU in online gaming is increasingly seen in clinical populations, we explored PIU in the context of MMO gaming. Using a cross-sectional design, we sought to identify clinical and personality factors, as well as motivations for gaming, that differentiated between people who scored high or low on a measure of problematic Internet use. Subjects completed all study procedures via an online survey. Participants were 163 MMO users recruited from the community, from gaming websites, and from online forums. Subjects completed a series of demographic, mood, anxiety, and personality questionnaires. The study found that individuals in the high PIU group (n = 79) were more likely to have higher levels of social phobia (p = .000), state (p = .000) and trait (p = .000) anxiety, introversion (p = .000), neuroticism (p = .000) and absorption (p = .019) than individuals in the low-PIU group (n = 84). Different reasons for gaming also characterized the group with more problematic Internet use. Our findings provide support for the idea that high anxiety and absorption may be risk factors for problematic Internet use within the MMO gaming environment and suggest that gamers who endorse problematic Internet use identify different motivations for online gaming than gamers who do not.", "keywords": "internet addiction;anxiety;personality;online gaming", "title": "Clinical and Personality Correlates of MMO Gaming: Anxiety and Absorption in Problematic Internet Use"} {"abstract": "The conjugate gradient method is a useful and powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method, which has good numerical performance but no global convergence result under traditional line searches such as Armijo, Wolfe and Goldstein line searches. In this paper a convergent version of the Liu–Storey conjugate gradient method (LS for short) is proposed for minimizing functions that have Lipschitz continuous partial derivatives. By estimating the Lipschitz constant of the derivative of objective functions, we can find an adequate step size at each iteration so as to guarantee the global convergence and improve the efficiency of the LS method in practical computation.", "keywords": "unconstrained optimization;liu–storey conjugate gradient method;global convergence", "title": "Convergence of Liu–Storey conjugate gradient method"}
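Aside on "Convergence of Liu–Storey conjugate gradient method" above: the LS direction update is compact enough to sketch. The code below uses a plain Armijo backtracking search with a steepest-descent restart safeguard rather than the Lipschitz-estimate step size the paper actually proposes; the Rosenbrock test function and all constants are illustrative only.

```python
# Sketch of Liu-Storey CG: beta_k = g_k^T (g_k - g_{k-1}) / (-d_{k-1}^T g_{k-1}).
import numpy as np

def ls_cg(f, grad, x, iters=500, tol=1e-8):
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                 # safeguard: restart when not a descent dir
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5                   # Armijo backtracking
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (-(d @ g))   # Liu-Storey beta
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
rosen_g = lambda z: np.array([-2*(1 - z[0]) - 400*z[0]*(z[1] - z[0]**2),
                              200*(z[1] - z[0]**2)])
print(ls_cg(rosen, rosen_g, np.array([-1.2, 1.0])))   # should approach (1, 1)
```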
{"abstract": "Packet classification is implemented in modern network routers for providing differentiated services based on packet header information. Traditional packet classification only reports a single matched rule with the highest priority for an incoming packet and takes an action accordingly. With the emergence of new Internet applications such as network intrusion detection systems, all matched rules need to be reported. This multi-match problem is more challenging and has been attracting attention in recent years. Because of the stringent time budget on classification, architectural solutions using ternary content addressable memory (TCAM) are the preferred choice for backbone network routers. However, despite its advantage in search speed, TCAM is much more expensive than SRAM, and is notorious for its extraordinarily high power consumption. These problems limit the application and scalability of TCAM-based solutions. This paper presents a tree-based multi-match packet classification technique combining the benefits of both TCAMs and SRAMs. The experiments show that the proposed solution achieves significantly more savings on both memory space and power consumption for packet matching compared to existing solutions.", "keywords": "network router;packet classification;multi-match;ternary content addressable memory;network intrusion detection system", "title": "A space- and power-efficient multi-match packet classification technique combining TCAMs and SRAMs"} {"abstract": "The authors consider second-order difference equations of the type $\Delta((\Delta y_n)^{\alpha}) + q_n y_{\sigma(n)}^{\alpha} = 0$ (E), where $\alpha > 0$ is the ratio of odd positive integers, $\{q_n\}$ is a positive sequence, and $\{\sigma(n)\}$ is a positive increasing sequence of integers with $\sigma(n) \to \infty$ as $n \to \infty$. They give some oscillation and comparison results for equation (E). ", "keywords": "comparison theorems;difference equations;half-linear equations;second-order;oscillation", "title": "Oscillation and comparison theorems for half-linear second-order difference equations"} {"abstract": "This paper presents a method to automate the process of surface scanning using optical range sensors and based on a priori known information from a CAD model. A volumetric model implemented through a 3D voxel map is generated from the object CAD model and used to define a sensing plan composed of a set of viewpoints and the respective scanning trajectories. Surface coverage with high data quality and scanning costs are the main aspects in sensing plan definition. A surface following scheme is used to define collision-free and efficient scanning path trajectories. Results of experimental tests performed on a typical industrial scanning system with 5 dof are shown. ", "keywords": "automatic surface scanning;viewpoint set computation;optical range sensors;cad model;next best viewpoint;range data", "title": "Automated 3D surface scanning based on CAD model"} {"abstract": "Tracking moving targets is one of the important problems of wireless sensor networks. We have considered a sensor network where numerous sensor nodes are spread in a grid-like manner. These sensor nodes are capable of storing data and thus act as separate datasets. The entire network of these sensors acts as a set of distributed datasets. Each of these datasets has its local temporal dataset along with spatial data, i.e., the geographical coordinates of a given object or target. In this paper an algorithm is introduced that mines global temporal patterns from these datasets and results in the discovery of linear or nonlinear trajectories of moving objects under supervision. The main objective here is to perform in-network aggregation between the data contained in the various datasets to discover global spatio-temporal patterns; the main constraint is that there should be minimal communication among the participating nodes. We present the algorithm and analyze it in terms of the communication costs.", "keywords": "target tracking;sensor networks;in-network aggregation;spatio-temporal mining", "title": "A NEW MECHANISM FOR TRACKING A MOBILE TARGET USING GRID SENSOR NETWORKS"} {"abstract": "Simulation-based educational products are an excellent set of illustrative tools that proffer features like visualization of the dynamic behavior of a real system, etc.
Such products have great efficacy in education and are known to be one of the first-rate student-centered learning methodologies. These products allow students to practice skills such as critical thinking and decision-making. In this paper, a case is presented where a scenario-based e-learning product, namely 'supply chain simulator', is developed at KFUPM for an introductory technology course. The product simulates a supply chain - a network of facilities and distribution systems that carries out the task of procurement and transformation of materials from manufacturer to customer. The product was put to the test during four semesters and results of the survey conducted by the instructors and the students are presented. The results clearly suggest the benefits of using such a tool in enhancing student learning. ", "keywords": "scenario-based e-learning;teaching/learning strategies;interactive learning environments;active learning;supply chain", "title": "Supply chain simulator: A scenario-based educational tool to enhance student learning"} {"abstract": "Traceability codes are used in schemes that prevent illegal redistribution of digital content. In this Letter, we use Chinese Remainder Theorem codes to construct traceability codes. Both the code parameters and the traitor identification process take into account the non-uniformity of the alphabet of Chinese Remainder Theorem codes. Moreover it is shown that the identification process can be done in polynomial time using list decoding techniques.", "keywords": "fingerprinting;traitor tracing;chinese remainder theorem", "title": "Obtaining traceability codes from Chinese Remainder Theorem codes"} {"abstract": "We propose a learning and control model of the arm for a loading task in which an object is loaded onto one hand with the other hand, in the sagittal plane. Postural control during object interactions provides important points to motor control theories in terms of how humans handle dynamics changes and use the information of prediction and sensory feedback. For the learning and control model, we coupled a feedback-error-learning scheme with an Actor-Critic method used as a feedback controller. To overcome sensory delays, a feedforward dynamics model (FDM) was used in the sensory feedback path. We tested the proposed model in simulation using a two-joint arm with six muscles, each with time delays in muscle force generation. By applying the proposed model to the loading task, we showed that motor commands started increasing before an object was loaded on, in order to stabilize arm posture. We also found that the FDM contributes to the stabilization by predicting how the hand changes based on contexts of the object and efferent signals. For comparison with other computational models, we present the simulation results of a minimum-variance model.", "keywords": "motor control;fdm;loading;actor-critic;feedback-error-learning", "title": "Learning and Control Model of the Arm for Loading"}
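Aside on "Obtaining traceability codes from Chinese Remainder Theorem codes" above: the encoding behind such codes is residue arithmetic over pairwise-coprime moduli, which also shows the non-uniform alphabet the abstract mentions (each position has its own modulus). The sketch below covers only encoding and erasure-style reconstruction via CRT; the moduli are arbitrary examples, and the traitor-tracing and list-decoding layers are not reproduced.

```python
# Sketch of a CRT code: message m -> residues (m mod p_i); any K residues
# whose moduli multiply past the message range reconstruct m exactly.
from math import prod

MODULI = [11, 13, 17, 19, 23]    # pairwise coprime: a non-uniform alphabet
K = 3                            # residues sufficient for reconstruction

def encode(m):
    assert 0 <= m < prod(sorted(MODULI)[:K])  # fits in the K smallest moduli
    return [m % p for p in MODULI]

def decode(pairs):
    # Standard CRT reconstruction from (residue, modulus) pairs.
    M = prod(p for _, p in pairs)
    x = 0
    for r, p in pairs:
        Mp = M // p
        x += r * Mp * pow(Mp, -1, p)  # pow(., -1, p): modular inverse
    return x % M

word = encode(1000)
print(word)                                   # one symbol per modulus
print(decode(list(zip(word, MODULI))[:K]))    # 1000, from the first K symbols
```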
{"abstract": "We consider a finite-capacity single-server vacation model with close-down/setup times and Markovian arrival processes (MAP). The queueing model has potential applications in classical IP over ATM or IP switching systems, where the close-down time corresponds to an inactive timer and the setup time to the time delay to set up a switched virtual connection (SVC) by the signaling protocol. The vacation time may be considered as the time period required to release an SVC or as the time during which the server goes to set up other SVCs. By using the supplementary variable technique, we obtain the queue length distribution at an arbitrary instant, the loss probability, the setup rate, as well as the Laplace-Stieltjes transforms of both the virtual and actual waiting time distributions.", "keywords": "markovian arrival process;finite capacity queue;vacation;setup time;close-down time;supplementary variable method", "title": "A finite-capacity queue with exhaustive vacation/close-down/setup times and Markovian arrival processes"} {"abstract": "Appropriately designing the proposal kernel of particle filters is an issue of significant importance, since a bad choice may lead to deterioration of the particle sample and, consequently, waste of computational power. In this paper we introduce a novel algorithm adaptively approximating the so-called optimal proposal kernel by a mixture of integrated curved exponential distributions with logistic weights. This family of distributions, referred to as mixtures of experts, is broad enough to be used in the presence of multi-modality or strongly skewed distributions. The mixtures are fitted, via online-EM methods, to the optimal kernel through minimisation of the Kullback-Leibler divergence between the auxiliary target and instrumental distributions of the particle filter. At each iteration of the particle filter, the algorithm is required to solve only a single optimisation problem for the whole particle sample, yielding an algorithm with only linear complexity. In addition, we illustrate in a simulation study how the method can be successfully applied to optimal filtering in nonlinear state-space models.", "keywords": "optimal proposal kernel;adaptive algorithms;kullback-leibler divergence;coefficient of variation;expectation-maximisation;particle filter;sequential monte carlo;shannon entropy", "title": "Adaptive sequential Monte Carlo by means of mixture of experts"}
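Aside on "Adaptive sequential Monte Carlo by means of mixture of experts" above: for contrast, the baseline such adaptive-proposal methods improve on is the bootstrap particle filter, which proposes from the prior dynamics and so ignores the current observation. A minimal sketch on a standard 1-D nonlinear benchmark model follows; it does not implement the paper's mixture-of-experts proposal.

```python
# Bootstrap particle filter on a classic nonlinear state-space model.
import numpy as np

rng = np.random.default_rng(1)
T, N = 50, 1000
x_true, ys = 0.0, []
for t in range(T):                            # simulate a trajectory
    x_true = 0.5*x_true + 25*x_true/(1 + x_true**2) + rng.normal()
    ys.append(x_true**2 / 20 + rng.normal())

parts = rng.normal(0, 1, N)
for y in ys:
    # propagate from the prior dynamics (the "bootstrap" proposal)
    parts = 0.5*parts + 25*parts/(1 + parts**2) + rng.normal(0, 1, N)
    logw = -0.5 * (y - parts**2 / 20)**2      # Gaussian likelihood, sigma = 1
    w = np.exp(logw - logw.max())
    parts = rng.choice(parts, N, p=w / w.sum())   # multinomial resampling
print("filtered mean at final step:", parts.mean())
```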
{"abstract": "Personalization services in a ubiquitous computing environment-ubiquitous personalization services computing-are expected to emerge in diverse environments. Ubiquitous personalization must address the limited computational power of personal devices and potential privacy issues. Such characteristics require managing and maintaining a client-side recommendation model for ubiquitous personalization. To implement the client-side recommendation model, this paper proposes Buying-net, a customer network in ubiquitous shopping spaces. Buying-net is operated in a community, called the Buying-net space, of devices, customers, and services that cooperate together to achieve common goals. The customers connect to the Buying-net space using their own devices that contain software performing tasks of learning the customers' preferences, searching for similar customers for network formation, and generating recommendation lists of items. Buying-net attempts to improve recommendation accuracy with less computational time by focusing on local relationships of customers and newly obtained information. We experimented with such customer networks in the area of multimedia content recommendation and validated that Buying-net outperformed a typical collaborative-filtering-based recommender system on accuracy as well as computational time. This shows that Buying-net has good potential to be a system for ubiquitous shopping.", "keywords": "mobile commerce;recommender systems;ubiquitous computing;ubiquitous personalization services", "title": "Personalized Recommendation over a Customer Network for Ubiquitous Shopping"} {"abstract": "A two-species Lotka-Volterra type competition model with stage structures for both species is proposed and investigated. In our model, the individuals of each species are classified as belonging to either the immature or the mature. First, we consider the stage-structured model with constant coefficients. By constructing suitable Lyapunov functions, sufficient conditions are derived for the global stability of nonnegative equilibria of the proposed model. It is shown that three typical dynamical behaviors (coexistence, bistability, dominance) are possible in the stage-structured competition model. Next, we consider the stage-structured competitive model in which the coefficients are assumed to be positively continuous periodic functions. By using Gaines and Mawhin's continuation theorem of coincidence degree theory, a set of easily verifiable sufficient conditions are obtained for the existence of positive periodic solutions to the model. Numerical simulations are also presented to illustrate the feasibility of our main results. ", "keywords": "stage structure;competition;global stability;periodic solution", "title": "Modelling and analysis of a competitive model with stage structure"} {"abstract": "One of the most important queries in spatio-temporal databases that aim at managing moving objects efficiently is the continuous K-nearest neighbor (CKNN) query. A CKNN query is to retrieve the K-nearest neighbors (KNNs) of a moving user at each time instant within a user-given time interval $[t_s, t_e]$. In this paper, we investigate how to process a CKNN query efficiently. Unlike previous related works, our work relaxes the past assumption that an object moves with a fixed velocity, by allowing the velocity of the object to vary within a known range. Due to the introduction of this uncertainty on the velocity of each object, processing a CKNN query becomes much more complicated. We will discuss the complications incurred by this uncertainty and propose a cost-effective P2KNN algorithm to find the objects that could be the KNNs at each time instant within the given query time interval. Besides, a probability-based model is designed to quantify the possibility of each object being one of the KNNs. Comprehensive experiments demonstrate the efficiency and the effectiveness of the proposed approach.", "keywords": "continuous k-nearest neighbor query;k-nearest neighbors;moving objects;moving query object;spatio-temporal databases", "title": "Continuous K-Nearest Neighbor Query for Moving Objects with Uncertain Velocity"}
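Aside on "Modelling and analysis of a competitive model with stage structure" above: the unstructured two-species Lotka-Volterra competition system that the paper extends is easy to simulate, and the coexistence regime mentioned in the abstract shows up for suitable coefficients. The Euler sketch below uses made-up constant coefficients and omits the immature/mature stages.

```python
# Classical two-species competition: dx/dt = x(r1 - a11 x - a12 y), etc.
r1, r2 = 1.0, 0.8           # intrinsic growth rates (illustrative)
a11, a12 = 1.0, 0.5         # competition felt by species 1
a21, a22 = 0.4, 1.0         # competition felt by species 2
x, y, dt = 0.1, 0.1, 0.01

for _ in range(int(200 / dt)):
    dx = x * (r1 - a11 * x - a12 * y)
    dy = y * (r2 - a21 * x - a22 * y)
    x, y = x + dt * dx, y + dt * dy
print(f"x = {x:.3f}, y = {y:.3f}")  # -> (0.75, 0.5): stable coexistence
```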
{"abstract": "The detection of events is essential to high-level semantic querying of video databases. It is also a very challenging problem requiring the detection and integration of evidence for an event available in multiple information modalities, such as audio, video and language. This paper focuses on the detection of specific types of events, namely, topic of discussion events that occur in classroom/lecture environments. Specifically, we present a query-driven approach to the detection of topic of discussion events with foils used in a lecture as a way to convey a topic. In particular, we use the image content of foils to detect visual events in which the foil is displayed and captured in the video stream. The recognition of a foil in video frames exploits the color and spatial layout of regions on foils using a technique called region hashing. Next, we use the textual phrases listed on a foil as an indication of a topic, and detect topical audio events as places in the audio track where the best evidence for the topical phrases was heard. Finally, we use a probabilistic model of event likelihood to combine the results of visual and audio event detection that exploits their time co-occurrence. The resulting identification of topical events is evaluated in the domain of classroom lectures and talks.", "keywords": "color;video;hashing;query-driven topic detection;use;digital video;topic of discussion events;slide detection;event;topical audio events;layout;timing;evidence;paper;informal;audio;place;spatial;video stream;contention;visualization;detection;environments;semantic;modal;region;exploit;recognition;multi-modal fusion;language;identification;types;queries;image;probabilistic models;tracking;database;integrability;query", "title": "detecting topical events in digital video"} {"abstract": "In this paper we investigate the guessing number, a relatively new concept linked to network coding and certain long-standing open questions in circuit complexity. Here we study the bounds and a variety of properties concerning this parameter. As an application, we obtain lower and upper bounds for shift graphs, a subclass of directed circulant graphs.", "keywords": "guessing number;shift graph;network coding", "title": "On the guessing number of shift graphs"} {"abstract": "Computer-based geometry systems have been widely used for teaching and learning, but largely based on mouse-and-keyboard interaction, these systems usually require users to draw figures by following strict task structures defined by menus, buttons, and mouse and keyboard actions. Pen-based designs offer a more natural way to develop geometry theorem proofs with hand-drawn figures and scripts. This paper describes a pen-based geometry theorem proving system that can effectively recognize hand-drawn figures and hand-written proof scripts, and accurately establish the correspondence between geometric components and proof steps. Our system provides dynamic and intelligent visual assistance to help users understand the process of proving and allows users to manipulate geometric components and proof scripts based on structures rather than strokes. The results from the evaluation study show that our system is well perceived and users have high satisfaction with the accuracy of sketch recognition, the effectiveness of visual hints, and the efficiency of structure-based manipulation.", "keywords": "geometry theorem proving;hand-drawn figures;hand-written proof scripts;structure based manipulation;recognition", "title": "intelligent understanding of handwritten geometry theorem proving"} {"abstract": "In this paper, a computational framework for patient-specific preoperative planning of robotics-assisted minimally invasive cardiac surgery (RAMICS) is presented. It is expected that the preoperative planning of RAMICS will improve the success rate by considering robot kinematics, patient-specific thoracic anatomy, and procedure-specific intraoperative conditions.
Given the significant anatomical features localized in the preoperative computed tomography images of a patient's thorax, port locations and robot orientations (with respect to the patient's body coordinate frame) are determined to optimize qualities such as dexterity, reachability, tool approach angles, and maneuverability. To address intraoperative geometric uncertainty, the problem is formulated as a generalized semi-infinite program (GSIP) with a convex lower-level problem to seek a plan that is less sensitive to geometric uncertainty in the neighborhood of surgical targets. It is demonstrated that with a proper formulation of the problem, the GSIP can be replaced by a tractable constrained nonlinear program that uses a multicriteria objective function to balance between the nominal task performance and robustness to collisions and joint limit violations. Finally, performance of the proposed formulation is demonstrated by a comparison between the plans generated by the algorithm and those recommended by an experienced surgeon for several case studies.", "keywords": "medical robotics;planning under uncertainty;port placement;preoperative planning", "title": "A Semi-Infinite Programming Approach to Preoperative Planning of Robotic Cardiac Surgery Under Geometric Uncertainty"} {"abstract": "The latency of the IEEE 802.11 handoff process in wireless local area networks (WLAN) is much higher than 50 ms. Since the bearable maximum delay is 50 ms in multimedia applications, e.g., voice over IP (VoIP), such a large handoff gap may bring up excessive jitter. Therefore, much research effort has been devoted to fast handoff. In this paper, we propose an accelerated handoff mechanism in which three methods are involved: (1) dynamic cluster chain, (2) PMKSA caching, and (3) fast reassociation with the pairwise transient key security association (PTKSA) establishment. Access points (APs) are arranged as a cluster for each client station (STA). APs that are cluster members can cache the PMKSA of the STA in advance to reduce the extensible authentication protocol-transport layer security (EAP-TLS) authentication delay. The dynamic cluster chain, which is arranged by a dynamic cluster selection and transition method, is proposed to assure that the STA stays within a cluster. Furthermore, the fast reassociation with the PTKSA establishment process incorporates the four-way handshake into the IEEE 802.11 reassociation process to further accelerate the handoff process. ", "keywords": "authentication;bss transition;cluster;cluster roaming key;handoff;wlan", "title": "An accelerated IEEE 802.11 handoff process based on the dynamic cluster chain method"} {"abstract": "A proper coloring of the edges of a graph G is called acyclic if there is no two-colored cycle in G. The acyclic edge chromatic number of G, denoted by $a'(G)$, is the least number of colors in an acyclic edge coloring of G. For certain graphs G, $a'(G) \ge \Delta(G) + 2$, where $\Delta(G)$ is the maximum degree in G. It is known that $a'(G) \le \Delta + 2$ for almost all $\Delta$-regular graphs, including all $\Delta$-regular graphs whose girth is at least $c\Delta\log\Delta$.
We prove that determining the acyclic edge chromatic number of an arbitrary graph is an NP-complete problem. For graphs G with sufficiently large girth in terms of $\Delta(G)$, we present deterministic polynomial-time algorithms that color the edges of G acyclically using at most $\Delta(G) + 2$ colors.", "keywords": "acyclic edge coloring;girth", "title": "Algorithmic aspects of acyclic edge colorings"} {"abstract": "In this paper an improved element-free Galerkin method is presented for heat conduction problems with heat generation and spatially varying conductivity. In order to improve the computational efficiency of meshless methods based on the Galerkin weak form, the nodal influence domain of the meshless method is extended to an arbitrary polygonal shape. When the dimensionless size of the nodal influence domain approaches 1, a Gauss quadrature point only contributes to those nodes in whose background cell the quadrature point is located. Thus, the bandwidth of the global stiffness matrix decreases considerably and the node search procedure is also avoided. Moreover, the shape functions almost possess the Kronecker delta function property, and essential boundary conditions can be implemented without any difficulties. Numerical results show that the arbitrary polygonal nodal influence domain not only yields high computational accuracy, but also greatly enhances the computational efficiency of the meshless method.", "keywords": "meshless method;heat conduction;spatial varying conductivity;computational efficiency;interpolation property", "title": "An improved meshless method with almost interpolation property for isotropic heat conduction problems"} {"abstract": "Animated shape transformations should be an intrinsic part of visual cyberworlds. However, quite often only limited animation of the polygon-based shapes can be found there, specifically when using the virtual reality modeling language (VRML) and its successor extensible 3D (X3D). This greatly limits the expressive power of visual cyberworlds and has motivated our research in this direction. In this paper, we present function-based extensions of VRML and X3D, which allow for time-dependent shape modeling on the web. Our shape modeling approach is based on the concurrent use of implicit, explicit and parametric functions defining geometry, appearance and their transformations through time. The functions are typed straight in VRML/X3D code as individual formulas and as function scripts. We have also developed a web-enabled interactive software tool for modeling function-based VRML/X3D objects.", "keywords": "function-based shape modeling;computer animation;3d web visualization;vrml;x3d", "title": "Function-defined shape metamorphoses in visual cyberworlds"}
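Aside on "Algorithmic aspects of acyclic edge colorings" above: the defining condition is easy to verify mechanically, since a proper edge coloring is acyclic exactly when the union of every two color classes is a forest. The checker below uses union-find cycle detection; the square-graph examples are illustrative.

```python
# Checker for acyclic edge colorings: proper + no bichromatic cycle.
from itertools import combinations

def find(parent, v):
    while parent.setdefault(v, v) != v:
        parent[v] = parent[parent[v]]        # path halving
        v = parent[v]
    return v

def is_acyclic_coloring(edges, color):
    # properness: edges sharing an endpoint must get different colors
    for e, f in combinations(edges, 2):
        if set(e) & set(f) and color[e] == color[f]:
            return False
    for c1, c2 in combinations(set(color.values()), 2):
        parent = {}                          # union-find per color pair
        for e in edges:
            if color[e] in (c1, c2):
                ru, rv = find(parent, e[0]), find(parent, e[1])
                if ru == rv:
                    return False             # two-colored cycle found
                parent[ru] = rv
    return True

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_acyclic_coloring(square, {(0, 1): "a", (1, 2): "b",
                                   (2, 3): "a", (3, 0): "b"}))  # False: 2-colored C4
print(is_acyclic_coloring(square, {(0, 1): "a", (1, 2): "b",
                                   (2, 3): "a", (3, 0): "c"}))  # True
```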
{"abstract": "This paper deals with crosstalk analysis of a CMOS-gate-driven capacitively and inductively coupled interconnect. The alpha power-law model of a MOS transistor is used to represent a CMOS driver. This is combined with a transmission-line-based coupled-interconnect model to develop a composite driver-interconnect-load model for analytical purposes. On this basis, a transient analysis of crosstalk noise is carried out. Comparison of the analytical results with SPICE-extracted results shows that the average error involved in estimating the noise peaks and their times of occurrence is less than 7%.", "keywords": "coupling;crosstalk noise;inductance;integrated-circuit interconnect;signal integrity;transmission lines", "title": "Crosstalk analysis for a CMOS-gate-driven coupled interconnects"} {"abstract": "One common approach in hierarchical text classification involves associating classifiers with nodes in the category tree and classifying text documents in a top-down manner. Classification methods using this top-down approach can scale well and cope with changes to the category trees. However, all these methods suffer from blocking, which refers to documents wrongly rejected by the classifiers at higher levels that therefore cannot be passed to the classifiers at lower levels. In this paper, we propose a classifier-centric performance measure known as blocking factor to determine the extent of the blocking. Three methods are proposed to address the blocking problem, namely, Threshold Reduction, Restricted Voting, and Extended Multiplicative. Our experiments using Support Vector Machine (SVM) classifiers on the Reuters collection have shown that they all could reduce blocking and improve the classification accuracy. Our experiments have also shown that the Restricted Voting method delivered the best performance.", "keywords": "data mining;text mining;classification", "title": "Blocking reduction strategies in hierarchical text classification"} {"abstract": "Ellipsoid estimation is important in many practical areas such as control, system identification, visual/audio tracking, experimental design, data mining, robust statistics and statistical outlier or novelty detection. A new method, called kernel minimum volume covering ellipsoid (KMVCE) estimation, that finds an ellipsoid in a kernel-defined feature space is presented. Although the method is very general and can be applied to many of the aforementioned problems, the main focus is on the problem of statistical novelty/outlier detection. A simple iterative algorithm based on Mahalanobis-type distances in the kernel-defined feature space is proposed for practical implementation. The probability that a non-outlier is misidentified by our algorithms is analyzed using bounds based on Rademacher complexity. The KMVCE method performs very well on a set of real-life and simulated datasets, when compared with standard kernel-based novelty detection methods.", "keywords": "minimum volume covering ellipsoid;rademacher complexity;kernel methods;outlier detection;novelty detection", "title": "Kernel ellipsoidal trimming"}
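Aside on "Kernel ellipsoidal trimming" above: the classical, non-kernel building block is the minimum-volume enclosing ellipsoid, computable by Khachiyan's iteration over point weights, with Mahalanobis-type distances then flagging boundary points and outliers. The sketch below is that plain Euclidean version, not the kernelized KMVCE method of the paper; the tolerance and data are arbitrary.

```python
# Khachiyan's algorithm for the minimum-volume enclosing ellipsoid (MVEE).
import numpy as np

def mvee(P, tol=1e-4):
    n, d = P.shape
    Q = np.hstack([P, np.ones((n, 1))]).T      # lifted points, (d+1) x n
    u = np.full(n, 1.0 / n)                    # barycentric weights
    while True:
        X = Q @ np.diag(u) @ Q.T
        w = np.einsum("in,ij,jn->n", Q, np.linalg.inv(X), Q)
        j = int(np.argmax(w))
        step = (w[j] - d - 1) / ((d + 1) * (w[j] - 1))
        if step < tol:
            break
        u *= 1 - step
        u[j] += step
    c = u @ P                                  # ellipsoid center
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(c, c)) / d
    return A, c                                # (x-c)^T A (x-c) <= 1 inside

pts = np.random.default_rng(2).normal(size=(200, 2))
A, c = mvee(pts)
dist = np.einsum("ni,ij,nj->n", pts - c, A, pts - c)
print(dist.max())   # ~1: the farthest points sit on the ellipsoid boundary
```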
{"abstract": "Building hazard assessment prior to earthquake occurrence exposes interesting problems, especially in earthquake-prone areas. Such an assessment provides an early warning system for building owners as well as the local and central administrators about the possible hazards that may occur in the next scenario earthquake event, and hence pre- and post-earthquake preparedness can be arranged according to a systematic program. For such an achievement, it is necessary to have efficient models for the prediction of the hazard scale of each building within the study area. Although there are subjective intensity index methods for such evaluations, the objective of this paper is to propose a useful tool through fuzzy logic (FL) to classify the buildings that would be vulnerable to earthquake hazard. FL is a soft-computing intelligent reasoning methodology, which is rapid, simple and easily applicable, with logical and rational associations between the building-hazard categories and the most effective factors. In this paper, the most important factors are the story number (building height), story height ratio, cantilever extension ratio, moment of inertia (stiffness), number of frames, and column and shear wall area percentages. Their relationships with the five hazard categories are presented through a supervised hazard center classification method. These five categories are none, slight, moderate, extensive, and complete hazard classes. A new supervised FL classification methodology is proposed, similar to the classical fuzzy c-means procedure, for the allocation of hazard categories to individual buildings. The application of the methodology is presented for the Zeytinburnu quarter of Istanbul City, Turkey. It is observed that out of 747 inventoried buildings, 7.6%, 50.0%, 14.6%, 20.1%, and 7.7% are subject to the expected earthquake with none, slight, moderate, extensive, and complete hazard classes, respectively.", "keywords": "earthquake;hazard;categories;moment of inertia;stiffness;supervised fuzzy model", "title": "Supervised fuzzy logic modeling for building earthquake hazard assessment"} {"abstract": "Image segmentation partitions an image into nonoverlapping regions, which ideally should be meaningful for a certain purpose. Thus, image segmentation plays an important role in many multimedia applications. In recent years, many image segmentation algorithms have been developed, but they are often very complex and some undesired results occur frequently. By combining the Fuzzy Support Vector Machine (FSVM) and Fuzzy C-Means (FCM), a color texture segmentation method based on image pixel classification is proposed in this paper. Specifically, we first extract the pixel-level color feature and texture feature of the image via the local spatial similarity measure model and localized Fourier transform, which are used as input to the FSVM model (classifier). We then train the FSVM model (classifier) by using FCM with the extracted pixel-level features. Color image segmentation can then be performed through the trained FSVM model (classifier). Compared with three other segmentation algorithms, the results show that the proposed algorithm is more effective in color image segmentation.", "keywords": "image segmentation;fuzzy support vector machine;fuzzy c-means;local spatial similarity measure model;localized angular phase", "title": "Color texture segmentation based on image pixel classification"}
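Aside on "Color texture segmentation based on image pixel classification" above: the FCM step used there to produce training labels is short enough to sketch. This is a generic fuzzy c-means with fuzzifier m = 2 on toy 1-D features, not the paper's pixel-level color/texture pipeline.

```python
# Fuzzy c-means: alternate weighted-center and membership updates.
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    U = np.random.default_rng(seed).dirichlet(np.ones(c), size=len(X))
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)  # u_ik = 1/sum_j (d_ik/d_jk)^(2/(m-1))
    return centers, U

X = np.array([[0.1], [0.2], [0.15], [0.9], [1.0], [0.95]])
centers, U = fcm(X, c=2)
print(centers.ravel())   # two centers, near 0.15 and 0.95
print(U.round(2))        # soft memberships usable as (fuzzy) training labels
```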
{"abstract": "In this paper, we present MuSeQoR: a new multi-path routing protocol that tackles the twin issues of reliability (protection against failures of multiple paths) and security, while ensuring minimum data redundancy. Unlike all the previous studies, reliability is addressed in the context of both erasure and corruption channels. We also quantify the security of the protocol in terms of the number of eavesdropping nodes. The reliability and security requirements of a session are specified by a user and are related to the parameters of the protocol adaptively. This relationship is of central importance and shows how the protocol attempts to simultaneously achieve reliability and security. In addition, by using optimal coding schemes and by dispersing the original data, we minimize the redundancy. Finally, extensive simulations were performed to assess the performance of the protocol under varying network conditions. The simulation studies clearly indicate the gains in using such a protocol and also highlight the enormous flexibility of the protocol. ", "keywords": "multi-path routing;qos;security;dispersity routing;diversity coding;erasure channel;corruption channel;ad hoc wireless networks", "title": "MuSeQoR: Multi-path failure-tolerant security-aware QoS routing in ad hoc wireless networks"} {"abstract": "We discuss the approximation of the mean of autocorrelated data under contaminations of different types. Many robust location estimators have been investigated carefully for independent data, but their properties have not been studied in detail under dependencies. We pay attention to estimators based on subranges like minimum volume ellipsoid and minimum covariance determinant estimators, mid-ranges and trimmed means, of which the sample mean and median are special cases, and also include the Hodges–Lehmann and the Bickel–Hodges estimators. Our interest is in small to moderate sample sizes.", "keywords": "time series;robustness;additive outliers;innovative outliers;sensitivity function;bias curve", "title": "Robust location estimation under dependence"} {"abstract": "The continuing and widespread use of lattice rules for high-dimensional numerical quadrature is driving the development of a rich and detailed theory. Part of this theory is devoted to computer searches for rules, appropriate to particular situations. In some applications, one is interested in obtaining the (lattice) rank of a lattice rule $Q(\Lambda)$ directly from the elements of a generator matrix B (possibly in upper triangular lattice form) of the corresponding dual lattice $\Lambda^{\perp}$. We treat this problem in detail, demonstrating the connections between this (lattice) rank and the conventional matrix rank deficiency of modulo-p versions of B.", "keywords": "lattice rules;rank;integration lattice", "title": "Determination of the rank of an integration lattice"} {"abstract": "Cell-phone data are used to measure passenger flows in the underground part of the Paris transit system. Travel times, trains' level of occupancy and origin–destination flows are measured. The measures are consistent with field observations, and with estimates from automated fare collection data. Having an independent real-time measure of train occupancy can be beneficial to the quality of the system.", "keywords": "quality of service;transit network;cellular phone data", "title": "Using cell phone data to measure quality of service and passenger flows of Paris transit system"}
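Aside on "Robust location estimation under dependence" above: two of the estimators compared there take only a few lines each. The data and contamination level below are purely illustrative, and the sample is i.i.d. rather than autocorrelated, which is exactly the simplification the paper moves beyond.

```python
# Trimmed mean and Hodges-Lehmann estimator versus the plain mean.
import numpy as np

def trimmed_mean(x, alpha=0.1):
    x = np.sort(x)
    k = int(alpha * len(x))
    return x[k:len(x) - k].mean()        # drop the k smallest and k largest

def hodges_lehmann(x):
    i, j = np.triu_indices(len(x))       # Walsh averages (x_i + x_j)/2, i <= j
    return np.median((x[i] + x[j]) / 2)

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0, 1, 200), np.full(10, 25.0)])  # outliers
print(data.mean())            # pulled toward 25 by the contamination
print(trimmed_mean(data))     # close to 0
print(hodges_lehmann(data))   # close to 0
```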
{"abstract": "This paper proposes a genetic algorithm (GA) based heuristic for the multi-period fixed charge distribution problem associated with backorders and inventories. The objective is to determine the size of the shipments, backorders and inventories at each period, so that the total cost incurred during the entire period towards transportation, backorders and inventories is minimized. The model is formulated as pure integer nonlinear programming and 0–1 mixed integer linear programming problems, and a GA based heuristic is proposed to provide solutions to the above problem. The proposed GA based heuristic is evaluated by comparing its solutions with a lower bound, the LINGO solver and approximate solutions. The comparisons reveal that the GA generates better solutions than the approximate solutions, and is capable of providing solutions equal to the LINGO solutions and closer to the lower bound value of the problems.", "keywords": "genetic algorithm;multi-period distribution problem;fixed charge", "title": "A genetic algorithm based heuristic to the multi-period fixed charge distribution problem"} {"abstract": "Coronary heart diseases (CHD) are one of the main causes of death in the United States. Although it is well known that CHD mainly occurs due to blocked arteries, many of the specifics of this disease are still subject to current research. It is commonly accepted that certain factors, such as a high-cholesterol diet, increase the risk of coronary heart disease. As a consequence, people should be educated to adhere to a diet low in low-density lipoprotein (LDL or bad cholesterol). In order for children to become familiar with these facts, educational, explorative computer systems can be employed to raise some awareness. This poster describes an educational computer system for children that serves this purpose. While practicing their navigation skills, the children can learn about the various types of blood cells and particles within the blood stream. A geometric model of the arterial vascular system of the heart has been developed, which considers vessels of different sizes. An interactive fly-through using a standard game controller facilitates the exploration of the interior structure of the vasculature. A blood flow simulation including several different particles within the blood stream allows the young explorer to understand their functionality. This system has been deployed as an interactive museum exhibit for children. The primary age group addressed by the science museum where it is currently being displayed is 4-9 years. With proper guidance by the museum personnel and the instructional material provided at the exhibit, the game is also suitable for slightly younger and much older children. The implemented system simulates a submarine-style navigation through the blood stream inside an arterial vascular tree of a heart. The vasculature is based on a computed tomography (CT) scan of a pig's heart. The user has full control over the navigation by using a Logitech WingMan Cordless Rumblepad as the input device. This controller provides two analog joysticks that can be used to achieve six-degrees-of-freedom input. In this application, the user controls forward and backward movement (acceleration and deceleration) with the left joystick while changing the orientation (left, right, up and down) by using the right joystick. Collision detection with the vessel walls ensures that the vasculature cannot be left. On collision with the vessel wall, as well as with any of the particles within the blood stream, force feedback is provided by using the rumble feature of the input device. In addition, audio feedback with different types of sounds allows the player to distinguish between the different types of collisions. Consequently, the user has complete manual control over the navigation while visual, audio, and force feedback provided by the system results in an easy-to-understand assessment of what is happening. This is especially important since the targeted audience is children of a relatively young age. The software is scalable in terms of the physical size of the blood vessel systems and the amount of geometry data that is used to represent it. It can be ported to various virtual environments (VEs).
At this point, it has been tested on a regular desktop computer and a large projection screen at the museum site. The projection screen in particular, which was used for the interactive exhibit, allows a user to fully immerse herself/himself in the scene. Overall, this computer system gives hands-on experience of the functions of the circulatory system of the heart and exposes the user to the various particles present in the blood stream. As a museum exhibit, it was very well received by the targeted audience, i.e., by children between the ages of four and nine, and beyond. The learning experience in the virtual environment was validated in a conversation during a complementary stage performance, which included a scientist dissecting a real pig's heart, where the children were asked to identify anatomical parts and discuss the importance of the circulatory system.", "keywords": "fly-through;educational computer game;cardiovascular;biomedical visualization;navigation", "title": "an explorational exhibit of a pig's heart"} {"abstract": "This paper discusses some of the leading concepts in education reform and their need for technical standards. It also discusses the efforts by several organizations to develop such standards and specifications, and how new stakeholders can get involved or monitor this work. There are a variety of reform efforts being advocated and pursued by researchers, educators, learning institutions, corporate trainers, and government leaders. Concepts such as student-centered learning, computer-based training, on-line learning, distance learning, just-in-time learning, and self-learning are widely accepted as having the potential to substantially improve the efficiency and effectiveness of learning. All of these concepts have the need for one or more underlying technical standards. This paper describes a number of these standards and the work that is being done to develop them. It is important to note that these technical standards are independent from the content material (known as content standards) that students would be required to learn, as well as from the amount of that content (known as performance standards) a student would be expected to master.", "keywords": "education;computer based learning;technical standards", "title": "Education reform and its needs for technical standards"} {"abstract": "We consider a static divergent two-stage supply chain with one distributor and many retailers. The unsatisfied demands at the retailers' end are treated as lost sales, whereas the unsatisfied demand is assumed to be backlogged at the distributor. The distributor uses an inventory rationing mechanism to distribute the available on-hand inventory among the retailers, when the sum of demands from the retailers is greater than the on-hand inventory at the distributor. The present study aims at determining the best installation inventory control-policy or order-policy parameters such as the base-stock levels and review periods, and inventory rationing quantities, with the objective of minimizing the total supply chain costs (TSCC) consisting of holding costs, shortage costs and review costs in the supply chain over a finite planning horizon. An exact solution procedure involving a mathematical programming model is developed to determine the optimum TSCC, base-stock levels, review periods and inventory rationing quantities (in the class of periodic review, order-up-to S policy) for the supply chain model under study.
On account of the computational complexity involved in optimally solving problems over a large finite time horizon, a genetic algorithm (GA) based heuristic methodology is presented.", "keywords": "divergent supply chain;lost sales;inventory rationing;base-stock levels;periodic review periods;allocation rules;mathematical programming model;genetic algorithm", "title": "Rationing mechanisms and inventory control-policy parameters for a divergent supply chain operating with lost sales and costs of review"} {"abstract": "The Virtual Element method allows for meshes made up of arbitrary polygonal elements. Guaranteed local and global conformity with no alteration of the geometry of the DFN. Unconstrained fracture-independent meshing. Application of domain decomposition preconditioners.", "keywords": "vem;fracture flows;darcy flows;discrete fracture networks", "title": "A globally conforming method for solving flow in discrete fracture networks using the Virtual Element Method"} {"abstract": "Because of the energy shortage and rising energy prices, energy efficiency has become a worldwide hot-spot problem. It is not only a matter of cost reduction, but also a great contribution to environmental protection. However, energy efficiency was largely ignored in the past decades. In order to gain more benefit and become more competitive in the market, energy efficiency should be considered as an essential factor in the early planning phase. To overcome these problems, a new approach, which introduces energy efficiency as a key criterion into the planning process, is presented in this article. An energy recovery network is built according to the analysis of process and product demands. Afterwards the energy loss of the whole system, transport performance and space demand are simultaneously taken into account with the purpose of finding good facility planning from both energy and economic aspects. Finally, a practical expansion case is used to validate the correctness and effectiveness of the proposed approach.", "keywords": "energy efficiency;facility planning;multi objective optimization;local search", "title": "Multi-objective optimization of facility planning for energy intensive companies"} {"abstract": "The existing margin-based discriminant analysis methods such as nonparametric discriminant analysis use the K-nearest neighbor (K-NN) technique to characterize the margin. The manifold learning-based methods use the K-NN technique to characterize the local structure. These methods encounter a common problem, that is, the nearest neighbor parameter K should be chosen in advance. How to choose an optimal K is a theoretically difficult problem. In this paper, we present a new margin characterization method named sparse margin-based discriminant analysis (SMDA) using sparse representation. SMDA can successfully avoid the difficulty of parameter selection. Sparse representation can be considered as a generalization of the K-NN technique. For a test sample, it can adaptively select the training samples that give the most compact representation. We characterize the margin by sparse representation. The proposed method is evaluated by using the AR, Extended Yale B database, and the CENPARMI handwritten numeral database.
Experimental results show the effectiveness of the proposed method; its performance is better than that of some other state-of-the-art feature extraction methods.", "keywords": "sparse margin;dimensional reduction;feature extraction", "title": "Sparse margin-based discriminant analysis for feature extraction"} {"abstract": "We study a variant of the Cont-Bouchaud model, which utilizes the percolation approach of multi-agent simulations of the stock market fluctuations. Here, instead of considering the relative price change as the difference of the total demand and total supply, we consider the relative price change to be proportional to the \"relative\" difference of demand and supply (the ratio of the difference in total demand and total supply to the sum of the total demand and total supply). We then study the probability distribution of the price changes.", "keywords": "econophysics;monte carlo;simulation;cont-bouchaud model", "title": "Market application of the percolation model: Relative price distribution"} {"abstract": "The Morris water maze is an experimental procedure in which animals learn to escape swimming in a pool using environmental cues. Despite its success in neuroscience and psychology for studying spatial learning and memory, the exact mnemonic and navigational demands of the task are not well understood. Here, we provide a mathematical model of rat swimming dynamics on a behavioural level. The model consists of a random walk, a heading change and a feedback control component in which learning is reflected in parameter changes of the feedback mechanism. The simplicity of the model renders it accessible and useful for analysis of experiments in which swimming paths are recorded. Here, we used the model to analyse an experiment in which rats were trained to find the platform with either three or one extramaze cue. Results indicate that the 3-cues group employs stronger feedback relying only on the actual visual input, whereas the 1-cue group employs weaker feedback relying to some extent on memory. Because the model parameters are linked to neurological processes, identifying different parameter values suggests the activation of different neuronal pathways.", "keywords": "autoregression;dynamic modelling;learning and memory;random walk;navigation;spatial memory;water maze;autocorrelation;autoregressive model", "title": "Feedback control strategies for spatial navigation revealed by dynamic modelling of learning in the Morris water maze"}
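Aside on "Market application of the percolation model: Relative price distribution" above: the modified price rule is a one-liner, r = (D - S)/(D + S), and its effect on the return distribution can be eyeballed with a crude Monte Carlo. The sketch below replaces true percolation clusters with geometric cluster sizes, so it only imitates the flavor of the model; every parameter is arbitrary.

```python
# Toy Monte Carlo of the "relative" price-change rule r = (D - S)/(D + S).
import numpy as np

rng = np.random.default_rng(4)
returns = []
for _ in range(20000):
    sizes = rng.geometric(0.1, size=50)        # 50 trading clusters (toy sizes)
    sides = rng.choice([-1, 0, 1], size=50,    # sell / inactive / buy
                       p=[0.05, 0.90, 0.05])
    D = sizes[sides == 1].sum()                # total demand
    S = sizes[sides == -1].sum()               # total supply
    if D + S > 0:
        returns.append((D - S) / (D + S))      # bounded in [-1, 1]
print(np.histogram(returns, bins=10, range=(-1.0, 1.0))[0])
```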
{"abstract": "A great number of biological experiments show that gamma oscillation occurs in many brain areas after the presentation of a stimulus. The neural systems in these brain areas are highly heterogeneous. Specifically, the neurons and synapses in these neural systems are diversified; the external inputs and parameters of these neurons and synapses are heterogeneous. How gamma oscillation is generated in such highly heterogeneous networks remains a challenging problem. Aiming at this problem, a highly heterogeneous complex network model that takes account of many aspects of real neural circuits was constructed. The network model consists of excitatory neurons and fast spiking interneurons, has three types of synapses (GABAA, AMPA, and NMDA), and has highly heterogeneous external drive currents. We found a new regime for robust gamma oscillation, i.e. the oscillation in inhibitory neurons is rather accurate but the oscillation in excitatory neurons is weak, in such highly heterogeneous neural networks. We also found that the mechanism of the oscillation is a mixture of interneuron gamma (ING) and pyramidal-interneuron gamma (PING). We explained the mixed ING and PING mechanism in a consistent way by a compound post-synaptic current, which has a slowly rising excitatory stage and a sharply decreasing inhibitory stage.", "keywords": "gamma oscillation;heterogeneity;synapse;balanced networks", "title": "A new regime for highly robust gamma oscillation with co-existence of accurate and weak synchronization in excitatory–inhibitory networks"} {"abstract": "CdS/CdTe thin films with 2.1 μm thickness were grown using R.F. magnetron sputtering in two different mixtures of Ar and O2. The substrate was a commercially available Pilkington glass with TCO deposited. The concentration of O2 was selected to be 0, 1 and 5%. The crystallographic, morphological, optical and electrical properties of the as-deposited samples were compared with the ones treated with CdCl2 and subsequently annealed at high temperature. The films' morphology and crystallinity were studied by X-ray diffraction and scanning electron microscopy. X-ray diffraction shows a transition from the zinc blende cubic phase to hexagonal as the oxygen content increases from 0 to 5%. The measurements show larger band gaps and grain sizes for the films with higher oxygen content. The band gap and transmission rate of the O2-free and oxygenated devices are different, and the grain size is greatly affected by the oxygen content.", "keywords": "cdte thin film;oxygen incorporation;sem;x-ray diffraction", "title": "Oxygen incorporation into CdS/CdTe thin film solar cells"} {"abstract": "Modern programming environments provide extensive support for inspecting, analyzing, and testing programs based on the algorithmic structure of a program. Unfortunately, support for inspecting and understanding runtime data structures during execution is typically much more limited. This paper provides a general purpose technique for abstracting and summarizing entire runtime heaps. We describe the abstract heap model and the associated algorithms for transforming a concrete heap dump into the corresponding abstract model as well as algorithms for merging, comparing, and computing changes between abstract models. The abstract model is designed to emphasize high-level concepts about heap-based data structures, such as shape and size, as well as relationships between heap structures, such as sharing and connectivity. We demonstrate the utility and computational tractability of the abstract heap model by building a memory profiler. We use this tool to identify, pinpoint, and correct sources of memory bloat for programs from DaCapo.", "keywords": "heap structure;runtime analysis;memory profiling;program understanding", "title": "Abstracting Runtime Heaps for Program Understanding"} {"abstract": "A graph is called $\gamma$-critical if the removal of any vertex from the graph decreases the domination number, while a graph with no isolated vertex is $\gamma_t$-critical if the removal of any vertex that is not adjacent to a vertex of degree 1 decreases the total domination number. A $\gamma_t$-critical graph that has total domination number $k$ is called $k$-$\gamma_t$-critical. In this paper, we introduce a class of $k$-$\gamma_t$-critical graphs of high connectivity for each integer $k \ge 3$. In particular, we provide a partial answer to the question \"Which graphs are $\gamma$-critical and $\gamma_t$-critical or one but not the other?\" posed in a recent work [W.
Goddard, T.W. Haynes, M.A. Henning, L.C. van der Merwe, The diameter of total domination vertex critical graphs, Discrete Math. 286 (2004) 255–261].", "keywords": "total domination;vertex critical;connectivity;diameter", "title": "On total domination vertex critical graphs of high connectivity"} {"abstract": "This paper presents efficient techniques for the qualitative and quantitative analysis of biochemical networks, which are modeled by means of qualitative and stochastic Petri nets, respectively. The analysis includes standard Petri net properties as well as model checking of Computation Tree Logic and Continuous Stochastic Logic. Efficiency is achieved by using interval decision diagrams to alleviate the well-known problem of state space explosion, and by applying operations exploiting the Petri net structure and the principle of locality. All presented techniques are implemented in our tool IDD-MC, which is available on our website. ", "keywords": "biochemical networks;petri nets;interval decision diagrams;ctl;csl;model checking", "title": "IDD-based model validation of biochemical networks"} {"abstract": "We discuss in this paper the form of the solutions of the following recursive sequences: $x_{n+1} = \frac{x_{n-3}x_{n-4}}{x_n(\pm 1 \pm x_{n-3}x_{n-4})}$, $n = 0, 1, \ldots$, where the initial conditions are arbitrary real numbers. Moreover, we study the dynamics and behavior of the solutions.", "keywords": "difference equations;recursive sequences;stability;periodic solution", "title": "The Form of The Solution and Dynamics of a Rational Recursive Sequence"} {"abstract": "Data warehousing is an approach to data integration wherein integrated information is stored in a data warehouse for direct querying and analysis. To provide fast access, a data warehouse stores materialized views of the sources of its data. As a result, a data warehouse needs to be maintained to keep its contents consistent with the contents of its data sources. Incremental maintenance is generally regarded as a more efficient way to maintain materialized views in a data warehouse. In this paper a strategy for the maintenance of data warehouses is presented. It has the following characteristics: it is self-maintainable (weak), incremental, non-blocking (the analysts' transactions and the maintenance transaction are executed concurrently) and is performed in real time. The proposed algorithm is implemented for view definition SPJ (Select Project Join) queries and it calculates the aggregate functions: sum, avg, count, min and max. Aggregate functions are calculated like algebraic functions (the new result of the function can be computed using some small, constant-size storage that accompanies the existing value of the aggregate). We have named this improved algorithm ∞VNLTR (unlimited ∞V (versions), NL (non-blocking), TR (in real time)).", "keywords": "self-maintenable;data warehouse", "title": "real time self-maintenable data warehouse"}
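Aside on "The Form of The Solution and Dynamics of a Rational Recursive Sequence" above: the recursion is trivial to iterate numerically, which is a quick way to observe the form of its solutions before proving it. The snippet below follows the (+1, +) sign branch from arbitrary positive initial values.

```python
# Iterate x_{n+1} = x_{n-3} x_{n-4} / (x_n (1 + x_{n-3} x_{n-4})).
x = [2.0, 3.0, 5.0, 7.0, 11.0]      # x_{-4}, x_{-3}, x_{-2}, x_{-1}, x_0
for n in range(20):
    prod = x[-4] * x[-5]            # x_{n-3} * x_{n-4}
    x.append(prod / (x[-1] * (1 + prod)))
print([round(v, 6) for v in x[5:15]])   # terms x_1 .. x_10
```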
We show that the new construction gives a better ratio of efficiency compared with the three previously known constructions associated with subsets of a set, its analogue over a vector space, and the dual spaces of a symplectic space.", "keywords": "pooling designs;d-disjunct matrix;symplectic space;totally isotropic subspaces;non-isotropic subspaces", "title": "Constructing error-correcting pooling designs with symplectic space"} {"abstract": "The discrete wavelet transform (DWT) is used in several image and video compression standards, in particular JPEG2000. A 2D DWT consists of horizontal filtering along the rows followed by vertical filtering along the columns. It is well-known that a straightforward implementation of vertical filtering (assuming a row-major layout) induces many cache misses, due to lack of spatial locality. This can be avoided by interchanging the loops. This paper shows, however, that the resulting implementation suffers significantly from 64K aliasing, which occurs in the Pentium 4 when two data blocks are accessed that are a multiple of 64K apart, and we propose two techniques to avoid it. In addition, if the filter length is longer than four, the number of ways of the L1 data cache of the Pentium 4 is insufficient to avoid cache conflict misses. Consequently, we propose two methods for reducing conflict misses. Although experimental results have been collected on the Pentium 4, the techniques are general and can be applied to other processors with different cache organizations as well. The proposed techniques improve the performance of vertical filtering compared to already optimized baseline implementations by a factor of 3.11 for the (5,3) lifting scheme, 3.11 for Daubechies' transform of four coefficients, and by a factor of 1.99 for the Cohen, Daubechies, and Feauveau 9/7 transform.", "keywords": "cache;discrete wavelet transform;memory hierarchy;performance", "title": "improving the memory behavior of vertical filtering in the discrete wavelet transform"} {"abstract": "March tests are widely used in the process of RAM testing. This family of tests is very efficient in the case of simple faults such as stuck-at or transition faults. In the case of a complex fault model, such as pattern sensitive faults, their efficiency is not sufficient. Therefore we have to use other techniques to increase fault coverage for complex faults. Multibackground memory testing is one such technique. In this case a selected March test is run many times. Each time it is run with new initial conditions. One of the conditions which we can change is the initial memory background. In this paper we compare the efficiency of multibackground tests based on four different algorithms of background generation.", "keywords": "ram testing;pattern sensitive faults;march tests;multibackground testing", "title": "ANALYSIS OF MULTIBACKGROUND MEMORY TESTING TECHNIQUES"} {"abstract": "Email spam is a much studied topic, but even though current email spam detecting software has been gaining a competitive edge against text-based email spam, new advances in spam generation have posed a new challenge: image-based spam. Image-based spam is email which includes embedded images containing the spam messages, but in binary format. In this paper, we study the characteristics of image spam to propose two solutions for detecting image-based spam, while drawing a comparison with the existing techniques. The first solution, which uses the visual features for classification, offers an accuracy of about 98%, i.e.
an improvement of at least 6% compared to existing solutions. SVMs (Support Vector Machines) are used to train classifiers using judiciously decided color, texture and shape features. The second solution offers a novel approach for near-duplicate detection in images. It involves clustering of image GMMs (Gaussian Mixture Models) based on the Agglomerative Information Bottleneck (AIB) principle, using Jensen-Shannon divergence (JS) as the distance measure.", "keywords": "machine learning;email spam;image analysis", "title": "detecting image spam using visual features and near duplicate detection"} {"abstract": "This manuscript presents an improved region-based active contour model for noisy image segmentation. We define a local energy according to intensity information within the neighborhood of each point in the image domain. By introducing a kernel function, our method employs intensity information in a local region to guide the motion of the active contour. Experiments on synthetic and real world images show that our model is robust to image noise while preserving the segmentation efficacy.", "keywords": "noisy image segmentation;robust chan-vese model;level set method;variational method", "title": "Exploiting local intensity information in Chan-Vese model for noisy image segmentation"} {"abstract": "This paper presents an iterative spectral framework for pairwise clustering and perceptual grouping. Our model is expressed in terms of two sets of parameters. Firstly, there are cluster memberships which represent the affinity of objects to clusters. Secondly, there is a matrix of link weights for pairs of tokens. We adopt a model in which these two sets of variables are governed by a Bernoulli model. We show how the likelihood function resulting from this model may be maximised with respect to both the elements of the link-weight matrix and the cluster membership variables. We establish the link between the maximisation of the log-likelihood function and the eigenvectors of the link-weight matrix. This leads us to an algorithm in which we iteratively update the link-weight matrix by repeatedly refining its modal structure. Each iteration of the algorithm is a three-step process. First, we compute a link-weight matrix for each cluster by taking the outer product of the vectors of current cluster-membership indicators for that cluster. Second, we extract the leading eigenvector from each modal link-weight matrix. Third, we compute a revised link-weight matrix by taking the sum of the outer products of the leading eigenvectors of the modal link-weight matrices.", "keywords": "graph-spectral methods;maximum likelihood;perceptual grouping;motion segmentation", "title": "A probabilistic spectral framework for grouping and segmentation"} {"abstract": "Security principles are often neglected by software architects, due to the lack of precise definitions. This results in potentially high-risk threats to systems. Our own previous work tackled this by introducing formal foundations for the least privilege (LP) principle in software architectures and providing a technique to identify violations of this principle. This work shows that this technique can scale by composing the results obtained from the analysis of the sub-parts of a larger system. The technique decomposes the system into independently described subsystems and a description listing the interactions between these subsystems.
These descriptions are then analyzed to obtain LP violations and subsequently composed to obtain the violations of the overall system.", "keywords": "least privilege;software architecture;security analysis", "title": "composition of least privilege analysis results in software architectures (position paper)"} {"abstract": "This paper describes a software-based system for offline tracking of eye and head movements using stored video images, designed for use in the study of air-traffic displays. These displays are typically dense with information; to address the research questions, we wish to be able to localize gaze within a single word within a line of text (a few minutes of arc), while at the same time allowing some freedom of movement to the subject. Accurate gaze tracking in the presence of head movements requires high-precision head tracking, and this was accomplished by registration of images from a forward-looking scene camera with a narrow field of view.", "keywords": "head and eye tracking;scan-path analysis;air traffic displays;image registration", "title": "a software-based eye tracking system for the study of air-traffic displays"} {"abstract": "In this article we investigate the parallel machine scheduling problem with job release dates, focusing on the case where the machines are dissimilar from each other. The goal of scheduling is to find an assignment and sequence for a set of jobs so that the total weighted completion time is minimised. This type of production environment is frequently encountered in the process industry, such as the chemical and steel industries, where the scheduling of jobs with different purposes is an important goal. This article formulates the problem as an integer linear programming model. Because of the dissimilarity of the machines, the ordinary job-based decomposition method is no longer applicable; a novel machine-based Lagrangian relaxation algorithm is therefore proposed. Penalty terms associated with violations of coupling constraints are introduced into the objective function by Lagrangian multipliers, which are updated using the subgradient optimisation method. For each machine-level subproblem after decomposition, a forward dynamic programming algorithm is designed together with the weighted shortest processing time rule to provide an optimal solution. A heuristic is developed to obtain a feasible schedule from the solution of the subproblems to provide an upper bound. Numerical results show that the new approach is computationally effective in handling the addressed problem and provides high quality schedules.", "keywords": "lagrangian relaxation;dissimilar parallel machine;release dates;dynamic programming;machine-based decomposition;heuristics", "title": "A new Lagrangian Relaxation Algorithm for scheduling dissimilar parallel machines with release dates"} {"abstract": "Use of Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) is evaluated in the analysis of complex biocide environmental sample mixtures by liquid chromatography with diode array detection (LC-DAD). Chromatographic coelution problems, caused either by the presence of unknown matrix interferences or by the use of short chromatographic columns to reduce analysis times, are investigated. Under such circumstances, the lack of chromatographic resolution and the lack of spectral selectivity of UV-VIS diode array detection are compensated by chemometric resolution using Multivariate Curve Resolution.
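The chemometric resolution step named just above alternates two least-squares updates; a bare-bones sketch of that loop (non-negativity enforced crudely by clipping; real MCR-ALS codes add closure, unimodality and convergence tests, so this is illustrative only):

```python
import numpy as np

def mcr_als(D, n_components, n_iter=200, seed=0):
    """Sketch of MCR-ALS: factor a data matrix D (elution times x
    wavelengths) as D ~ C @ S.T, alternating least-squares updates of
    the concentration profiles C and pure spectra S, with non-negativity
    imposed by clipping after each update."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], n_components))   # initial spectral guess
    for _ in range(n_iter):
        # C = D S (S^T S)^-1, then S = D^T C (C^T C)^-1, each clipped >= 0
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)
    return C, S
```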
Resolution of complex environmental mixtures and quantitative calibration curves for two types of chromatographic columns (25 and 7.5 cm) with different resolution and analysis times are shown. The limits of the proposed approach are investigated in the analysis of complex environmental samples with short LC columns and UV-VIS diode array detection. ", "keywords": "multivariate curve resolution;mcr-als;coelutions;short columns;lc-dad;biocides", "title": "Fast chromatography of complex biocide mixtures using diode array detection and multivariate curve resolution"} {"abstract": "Current crowding of IGBTs and power diodes in a chip or among chips is a barrier to the realization of highly reliable power modules. The author developed and demonstrated a 16-channel flat-sensitivity sensor array for IGBT current distribution measurement. The sensor array consists of tiny-scale film sensors with analog amps and a shield case against noise.", "keywords": "igbt;current distribution;current crowding;film sensor;reliability analysis;magnetic flux;digital calibration;flat sensitivity", "title": "16-Channel micro magnetic flux sensor array for IGBT current distribution measurement"} {"abstract": "Scalability of the cache coherence protocol is a key component in future shared-memory multi-core or multi-processor systems. The state space explosion is the first hurdle while applying model checking to scalable protocols. In order to validate parameterized cache coherence protocols effectively, we present a new method of reducing the state space of parameterized systems, two-dimensional abstraction (TDA). Drawing inspiration from the design principle of parameterized systems, an abstract model of an unbounded system is constructed out of finite states. The mathematical principles underlying TDA are presented. Theoretical reasoning demonstrates that TDA is correct and sound. An example of a parameterized cache coherence protocol based on MESI illustrates how to produce a much smaller abstract model by TDA. We also demonstrate the power of our method by applying it to various well-known classes of protocols. During the development of the TH-1A supercomputer system, TDA was used to verify the coherence protocol in the FT-1000 CPU and showed potential advantages in reducing the verification complexity.", "keywords": "parameterized cache coherence protocol;true concurrency;model checking;two-dimensional abstraction", "title": "State space reduction in modeling checking parameterized cache coherence protocol by two-dimensional abstraction"} {"abstract": "Consider a line y = f(x). Conventional line drawing algorithms sample (x, f(x)) on the line, where x must be an integer, and then map (x, f(x)) to the frame buffer according to the defined filter and f(x). In this paper, we propose to simulate a sampled point (x, f(x)) by the four pixels around it, where x and f(x) need not be integers. Based on the proposed low-pass filtering, we show that the effect of sampling at an infinite number of points along a line segment can be achieved, since the closed form of the intensities assigned to pixels exists. Furthermore, we show the coherence properties that can reduce the cost of computing these intensities.", "keywords": "computer graphics;line drawing algorithm;antialiasing", "title": "A new antialiased line drawing algorithm"} {"abstract": "Sequence alignment is a fundamental task for computational genomics research. We develop G-Aligner, which adopts the GPU as a hardware accelerator to speed up the sequence alignment process.
A leading CPU-based alignment tool is based on the Bi-BWT index; however, a direct implementation of this algorithm on the GPU cannot fully utilize the hardware power due to its irregular algorithmic structure. To better utilize the GPU hardware resources, we propose a filtering-verification algorithm employing both the Bi-BWT search and direct matching. We further improve this algorithm on the GPU through various optimizations, e.g., splitting a large kernel and using a warp-based implementation to avoid user-level synchronization. As a result, G-Aligner outperforms another state-of-the-art GPU-accelerated alignment tool, SOAP3, by 1.8-3.5 times for in-memory sequence alignment.", "keywords": "sequence alignment;gpgpu;parallel systems", "title": "High-performance short sequence alignment with GPU acceleration"} {"abstract": "The past decade has witnessed an unprecedented growth in user interface and human-computer interaction (HCI) technologies and methods. The synergy of technological and methodological progress on the one hand, and changing user expectations on the other, are contributing to a redefinition of the requirements for effective and desirable human-computer interaction. A key component of these emerging requirements, and of effective HCI in general, is the ability of these emerging systems to address user affect. The objective of this special issue is to provide an introduction to the emerging research area of affective HCI, some of the available methods and techniques, and representative systems and applications. ", "keywords": "affective hci;affective computing;affect recognition;affect expression;affective user modeling", "title": "To feel or not to feel: The role of affect in human-computer interaction"} {"abstract": "The dynamics of a class of generalized neural networks with time-varying delays are analyzed. Without constructing a Lyapunov function, general sufficient conditions for the existence, uniqueness and exponential stability of an equilibrium of the neural networks are obtained by the nonlinear Lipschitz measure approach. The new criteria are mild, independent of the delays and do not require the boundedness, differentiability or monotonicity assumptions of the activation functions. Moreover, the proposed results extend and improve existing ones.", "keywords": "neural networks;time-varying delay;exponential stability;exponential decay;nonlinear lipschitz measure", "title": "Exponential stability of a class of generalized neural networks with time-varying delays"} {"abstract": "The Video Game Sexism Scale was created to assess attitudes toward female gamers. Conformity to some masculine norms predicted video game sexism. Social dominance orientation predicted video game sexism.", "keywords": "video games;sex role stereotypes;gender roles;masculinity;sexual harassment;social identity model of deindividuation effects", "title": "Sexism in online video games: The role of conformity to masculine norms and social dominance orientation"} {"abstract": "We define a combinatorial checkerboard to be a function f : {1, . . . , m}^d -> {1,-1} of the form f(x_1, . . . , x_d) = f_1(x_1) · · · f_d(x_d) for some functions f_i : {1, . . . , m} -> {1,-1}. This is a variant of combinatorial rectangles, which can be defined in the same way but using {0, 1} instead of {1,-1}. We consider the problem of constructing explicit pseudorandom generators for combinatorial checkerboards. This is a generalization of small-bias generators, which correspond to the case m = 2.
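To make the checkerboard definition above concrete, here is a small sketch that samples a random combinatorial checkerboard (coordinates 0-indexed for convenience) and estimates its bias, the quantity a pseudorandom generator must fool:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 4, 3

# A checkerboard f(x) = f_1(x_1) * ... * f_d(x_d): one {-1,+1}-valued
# lookup table per coordinate, each sampled uniformly at random.
F = rng.choice([-1, 1], size=(d, m))

def f(x):
    """Evaluate the checkerboard at a batch of points x of shape (N, d)."""
    return np.prod(F[np.arange(d), x], axis=-1)

# Empirical bias E[f(x)] over uniform inputs; a generator epsilon-fools
# f if its output distribution matches this expectation within epsilon.
xs = rng.integers(0, m, size=(100_000, d))
print(f(xs).mean())
```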
We construct a pseudorandom generator that ε-fools all combinatorial checkerboards with seed length . Previous work by Impagliazzo, Nisan, and Wigderson implies a pseudorandom generator with seed length . Our seed length is better except when 1/ε >= d^(ω(log d)).", "keywords": "pseudorandom generators;combinatorial checkerboards;explicit constructions;derandomization", "title": "Pseudorandom generators for combinatorial checkerboards"} {"abstract": "Everyday family life involves a myriad of mundane activities that need to be planned and coordinated. We describe findings from studies of 44 different families' calendaring routines to understand how to best design technology to support them. We outline how a typology of calendars containing family activities is used by three different types of families (monocentric, pericentric, and polycentric), which vary in the level of family involvement in the calendaring process. We describe these family types, the content of family calendars, the ways in which they are extended through annotations and augmentations, and the implications from these findings for design.", "keywords": "families;coordination;awareness;calendars", "title": "The Calendar is Crucial: Coordination and Awareness through the Family Calendar"} {"abstract": "Hormesis is an adaptive response to low doses of otherwise harmful agents by triggering a cascade of stress-specific resistance pathways. Evidence from protozoa, nematodes, flies, rodents, and primates indicates that stress-induced tolerance modulates survival and longevity. The reality is that hormesis can prolong the healthy life span. Genetic background provides the potential for the longevity duration induced by stress. Senescence, or aging, is generally thought to be due to a different impact of selection for alleles positive for reproduction during early life but harmful in later life, a process called antagonistic pleiotropy (multiple phenotypic changes by a single gene). After reproduction, life span is invisible to selection. I propose the revision that mutations selected for survival until reproduction in early life may also extend later life (protagonistic pleiotropy). The protagonist candidate genes for extended life span are hormetic response genes, which activate the protective effect in both early and later life. My revision of the earlier evolutionary theory implies that natural selection of genes critical for early survival (life span until reproduction) can also be beneficial for extended longevity in old age, tipping the evolutionary balance in favor of a latent inducible life span extension unless excess stressor challenge exceeds the protection capacity. Mimetic triggers of the stress response promise the option of tricking the induction of metabolic pathways that confer resistance to environmental challenges, increased healthy life span, rejuvenation, and disease intervention without the danger of overwhelming damage by the stressor. Public policy should anticipate an increase in healthy life span.", "keywords": "hormesis;evolution;longevity;rejuvenation", "title": "The Myth and Reality of Reversal of Aging by Hormesis"} {"abstract": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI), in which the computer should have the ability to detect and track the user's affective states and provide corresponding feedback. The human multi-sensor affect system defines the expectation of a multimodal affect analyzer.
In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.", "keywords": "emotion recognition;affective computing;multimodal human-computer interaction;affect recognition", "title": "bimodal hci-related affect recognition"} {"abstract": "Better than second order accurate space-time adaptive mesh refinement (AMR). Time accurate local time stepping (LTS). High order ADER-WENO finite volume scheme for non-conservative hyperbolic systems. Applications to the Baer-Nunziato model of compressible multiphase flows in 2D and 3D. Very sharp resolution of material interfaces.", "keywords": "adaptive mesh refinement;time accurate local time stepping;high order ader approach;path-conservative weno finite volume schemes;compressible multi-phase flows;baer-nunziato model", "title": "High order space-time adaptive ADER-WENO finite volume schemes for non-conservative hyperbolic systems"} {"abstract": "Simulated noisy data sets are used to compare the accuracy of four existing covariance estimation methodologies. Among the discussed methodologies, the NNVE algorithm provides the most accurate estimates of covariance. To further improve the accuracy of the covariance estimation, a new methodology based on a modification of the NNVE methodology is proposed. The proposed methodology is shown to exhibit improved performance in classification as well as anomaly detection applications.", "keywords": "prognostics;system health management;covariance estimation", "title": "Evaluating covariance in prognostic and system health management applications"} {"abstract": "Today's power grid is facing many challenges due to increasing load growth, aging of existing power infrastructures, high penetration of renewables, and lack of fast monitoring and control. Utilizing recent developments in Information and Communication Technologies (ICT) at the power-distribution level, various smart-grid applications can be realized to achieve reliable, efficient, and green power. Interoperable exchange of information is already standardized in the globally accepted smart-grid standard, IEC 61850, over local area networks (LANs). Due to low installation cost, sufficient data rates, and ease of deployment, industrial wireless LAN technologies are gaining interest among power utilities, especially for less critical smart distribution network applications. Extensive work is carried out to examine the wireless LAN (WLAN) technology within a power distribution substation. The first phase of the work is initiated with radio noise interference measurements at 27.6- and 13.8-kV distribution substations, including circuit breaker switching operations.
For a detailed investigation, hardware prototypes of WLAN-enabled IEC 61850 devices are developed using industrial embedded systems, and the performance of smart distribution substation monitoring, control, and protection applications is analyzed for various scenarios using the round-trip time of IEC 61850 application messages. Finally, to examine the real-world field performance, the developed prototype devices are installed in the switchyard and control room of a 27.6-kV power distribution substation, and testing results of various applications are discussed.", "keywords": "distribution substation automation;iec 61850;ieee 802.11;intelligent electronic devices;smart grid", "title": "A Comprehensive Investigation of Wireless LAN for IEC 61850-Based Smart Distribution Substation Applications"} {"abstract": "In some industries, mass customization requires a supplier to provide an Original Equipment Manufacturer (OEM) with a wide range of variants of a given part. We consider an OEM-parts suppliers system for an automotive supply chain where parts are delivered to the assembly line several times a day in a just-in-time environment. Simulating varying assembly schedules and parts delivery schemes, we assess the effect of mass customization on the level of inventory the supplier needs for each variant in order to prevent stockouts. We find, among other things, that as the level of mass customization increases, there tends to be an increase in the level of inventory the supplier needs to maintain for each part variant in order to prevent stockouts. Theoretical support is provided for the phenomenon. The presented framework is also useful for evaluating the levels of mass customization that will enable the manufacturer to meet customers' requirements in a cost effective manner. Furthermore, the study confirms the superiority, in terms of inventory levels, of the min-max over the min-sum optimization framework.", "keywords": "supply chain management;manufacturing;mass customization;just-in-time assembly systems;automotive", "title": "An assessment of the effect of mass customization on suppliers inventory levels in a JIT supply chain"} {"abstract": "Both knowledge and social commitments have received considerable attention in Multi-Agent Systems (MASs), especially for multi-agent communication. Plenty of work has been carried out to define their semantics. However, the relationship between social commitments and knowledge has not been investigated yet. In this paper, we aim to explore such a relationship from the semantics and model checking perspectives with respect to CTLK logic (an extension of CTL logic with a modality for reasoning about knowledge) and CTLC logic (an extension of CTL with modalities for reasoning about commitments and their fulfillments). To analyze this logical relationship, we simply combine the two logics in one new logic named CTLKC. The purpose of such a combination is not to advocate a new logic, but only to express and figure out some reasoning postulates merging both knowledge and commitments as they are currently defined in the literature. By so doing, we identify some paradoxes in the new logic, showing that simply combining current versions of commitment and knowledge logics results in a logical language that violates some fundamental intuitions. Consequently, we propose CTLKC+, a new logic that fixes the identified paradoxes and allows us to reason about social commitments and knowledge simultaneously in a consistent manner.
Furthermore, we address the problem of model checking CTLKC+ by reducing it to the problem of model checking GCTL*, a generalized version of CTL* with action formulae. By doing so, we directly benefit from CWB-NC, the model checker of GCTL*. Using this reduction, we also prove that the computational complexity of model checking CTLKC+ is still PSPACE-complete for concurrent programs, as is the complexity of model checking CTLK and CTLC separately.", "keywords": "multi-agent systems;social commitments;agent communication;knowledge", "title": "On the interaction between knowledge and social commitments in multi-agent systems"} {"abstract": "This paper discusses a cubic B-spline interpolation problem with tangent directional constraint in R^3 space. Given m points and their tangent directional vectors as well, the interpolation problem is to find a cubic B-spline curve which interpolates both the positions of the points and their tangent directional vectors. Given the knot vector of the resulting B-spline curve and parameter values for all of the data points, the corresponding control points can often be obtained by solving a system of linear equations. This paper presents a piecewise geometric interpolation method combining an unclamping technique with a knot extension technique, with which there is no need to solve a system of linear equations. It firstly uses geometric methods to construct a seed curve segment, which interpolates several data point pairs, i.e., positions and tangent directional vectors of the points. The seed segment is then extended to interpolate the remaining data point pairs one by one in a piecewise fashion. We show that a B-spline curve segment can always be extended to interpolate a new data point pair by adding two more control points. Methods for extending a curve segment to interpolate one more data point pair by adding one more control point are also provided, which are utilized to construct an interpolation B-spline curve with as small a number of control points as possible. Numerical examples show the effectiveness and the efficiency of the new method.", "keywords": "geometric interpolation;tangent directional constraint;b-spline curves;knot extension;unclamping", "title": "Geometric point interpolation method in R^3 space with tangent directional constraint"} {"abstract": "We present an interactive software package for implementing the supervised classification task during the electromyographic (EMG) signal decomposition process using a fuzzy k-NN classifier and utilizing the MATLAB high-level programming language and its interactive environment. The method employs an assertion-based classification that takes into account a combination of motor unit potential (MUP) shapes and two modes of use of motor unit firing pattern information: the passive and the active modes. The developed package consists of several graphical user interfaces used to detect individual MUP waveforms from a raw EMG signal, extract relevant features, and classify the MUPs into motor unit potential trains (MUPTs) using assertion-based classifiers.", "keywords": "assertion-based classifiers;computer interaction;features extraction;fuzzy k-nn;motor unit potential classification;user interfaces", "title": "A software package for interactive motor unit potential classification using fuzzy k-NN classifier"} {"abstract": "The evaluation and selection of projects before an investment decision is customarily done using technical and financial information.
In this paper, we propose a new methodology that provides a simple approach to assess alternative projects and help the decision-maker select the best one for the National Iranian Oil Company, using six criteria for comparing investment alternatives in AHP and fuzzy TOPSIS techniques. The AHP is used to analyze the structure of the project selection problem and to determine the weights of the criteria, and the fuzzy TOPSIS method is used to obtain the final ranking. This application is conducted to illustrate the utilization of the model for project selection problems. Additionally, in the application, it is shown that the calculation of the criteria weights is important in the fuzzy TOPSIS method and that they could change the ranking. The decision-maker can use these different weight combinations in the decision-making process according to priority.", "keywords": "project selection;ahp;fuzzy topsis;decision-maker;criteria", "title": "Project selection for oil-fields development by using the AHP and fuzzy TOPSIS methods"} {"abstract": "Given a positive even integer n, it is found that the weight distribution of any n-variable symmetric Boolean function with maximum algebraic immunity (AI) n/2 is determined by the binary expansion of n. Based on the foregoing, all n-variable symmetric Boolean functions with maximum AI are constructed. Their number is (2wt(n) + 1)2^⌊log2 n⌋.", "keywords": "algebraic attack;algebraic immunity;symmetric boolean function", "title": "On 2k-Variable Symmetric Boolean Functions With Maximum Algebraic Immunity k"} {"abstract": "Ring-opened structures of C60 and C70 are shown to be stabilized by complexation with transition metal fragments of the form CnHnM, where n = 3 to 6 and M = Cr, Mn, Fe, Co, and Rh. The ring opening of C60 and C70 is compared with the reverse process of the well-known catalytic conversion of acetylene into benzene. Calculations at the semi-empirical PM3(tm) level show that the 6-membered ring in C60 and C70 can be opened up in different ways through complexation with a transition metal fragment. The mode of ring opening depends on the number of external 5- and 6-membered rings around the 6-membered ring being cleaved. The structures and energetics of the various ring-opened structures are discussed.", "keywords": "cage-opened fullerenes;c60;c70;metal complexes;pm3", "title": "A theoretical study of transition metal complexes of C60 and C70 and their ring-opened alternatives"} {"abstract": "The kinematic dynamo approximation describes the generation of a magnetic field in a prescribed flow of electrically conducting liquid. One of its main uses is as a proof-of-concept tool to test hypotheses about self-exciting dynamo action. Indeed, it provided the very first quantitative evidence for the possibility of the geodynamo. Despite its utility, numerical work has historically proven difficult due to the requirement of resolving fine structures, and reported solutions were often plagued by poor convergence. In this paper, we demonstrate the numerical superiority of a Galerkin scheme in solving the kinematic dynamo eigenvalue problem in a full sphere. After adopting a poloidal-toroidal decomposition and expanding in spherical harmonics, we express the radial dependence in terms of a basis of exponentially convergent orthogonal polynomials.
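One concrete radial basis of this exponentially convergent type, built (as described next) from one-sided Jacobi polynomials, is the Worland form; the sketch below assumes W_n^l(r) = r^l P_n^(-1/2, l-1/2)(2r^2 - 1), a standard full-sphere choice that may differ in detail from the paper's exact construction:

```python
import numpy as np
from scipy.special import eval_jacobi

def worland(n, l, r):
    """Worland-type radial basis function for a full sphere:
    W_n^l(r) = r^l * P_n^(-1/2, l-1/2)(2 r^2 - 1).
    The r**l prefactor and the squared argument 2r^2 - 1 keep each
    basis function infinitely differentiable through the origin r = 0,
    the main difficulty for full-sphere (as opposed to spherical-shell)
    discretizations."""
    return r**l * eval_jacobi(n, -0.5, l - 0.5, 2.0 * r**2 - 1.0)

r = np.linspace(0.0, 1.0, 5)
print(worland(2, 3, r))
```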
Each basis function is constructed from a terse sum of one-sided Jacobi polynomials that not only satisfies the boundary conditions of matching to an electrically insulating exterior, but is everywhere infinitely differentiable, including at the origin. This Galerkin method exhibits more rapid convergence, for a given problem size, than any other scheme hitherto reported, as demonstrated by a benchmark of the magnetic diffusion problem and by comparison to numerous kinematic dynamos from the literature. In the axisymmetric flows we consider in this paper, at a magnetic Reynolds number of O(100), a convergence of 9 significant figures in the most unstable eigenvalue requires only 40 radial basis functions; alternatively, 4 significant figures requires 20 radial functions. The terse radial discretization becomes particularly advantageous when considering flows whose associated numerical solution requires a large number of coupled spherical harmonics. We exploit this new method to confirm the tentatively proposed positive growth rate of the planar flow of Bachtiar et al. [4], thereby verifying a counter-example to the Zeldovich anti-dynamo theorem in a spherical geometry.", "keywords": "kinematic dynamo;zeldovich theorem;galerkin method;jacobi polynomial;basis function;eigenvalue;convergence", "title": "An optimal Galerkin scheme to solve the kinematic dynamo eigenvalue problem in a full sphere"} {"abstract": "We give a unified account of boosting and logistic regression in which each learning problem is cast in terms of optimization of Bregman distances. The striking similarity of the two problems in this framework allows us to design and analyze algorithms for both simultaneously, and to easily adapt algorithms designed for one problem to the other. For both problems, we give new algorithms and explain their potential advantages over existing methods. These algorithms are iterative and can be divided into two types based on whether the parameters are updated sequentially (one at a time) or in parallel (all at once). We also describe a parameterized family of algorithms that includes both a sequential- and a parallel-update algorithm as special cases, thus showing how the sequential and parallel approaches can themselves be unified. For all of the algorithms, we give convergence proofs using a general formalization of the auxiliary-function proof technique. As one of our sequential-update algorithms is equivalent to AdaBoost, this provides the first general proof of convergence for AdaBoost. We show that all of our algorithms generalize easily to the multiclass case, and we contrast the new algorithms with the iterative scaling algorithm. We conclude with a few experimental results with synthetic data that highlight the behavior of the old and newly proposed algorithms in different settings.", "keywords": "logistic regression;maximum-entropy methods;boosting;adaboost;bregman distances;convex optimization;iterative scaling;information geometry", "title": "Logistic regression, AdaBoost and Bregman distances"} {"abstract": "Scatter search is an evolutionary method that has been successfully applied to hard optimization problems. The fundamental concepts and principles of the method were first proposed in the 1970s, based on formulations dating back to the 1960s for combining decision rules and problem constraints. 
In contrast to other evolutionary methods like genetic algorithms, scatter search is founded on the premise that systematic designs and methods for creating new solutions afford significant benefits beyond those derived from recourse to randomization. It uses strategies for search diversification and intensification that have proved effective in a variety of optimization problems. This paper provides the main principles and ideas of scatter search and its generalized form, path relinking. We first describe a basic design to give the reader the tools to create relatively simple implementations. More advanced designs derive from the fact that scatter search and path relinking are also intimately related to the tabu search (TS) metaheuristic, and gain additional advantage by making use of TS adaptive memory and associated memory-exploiting mechanisms capable of being tailored to particular contexts. These and other advanced processes described in the paper facilitate the creation of sophisticated implementations for hard problems that often arise in practical settings. Due to their flexibility and proven effectiveness, scatter search and path relinking can be successfully adapted to tackle optimization problems spanning a wide range of applications and a diverse collection of structures, as shown in the papers of this volume.", "keywords": "metaheuristics;evolutionary computations;search theory;path relinking", "title": "Principles of scatter search"} {"abstract": "A multipartite or c-partite tournament is an orientation of a complete c-partite graph. Lu and Guo (submitted for publication) [3] recently introduced strong quasi-Hamiltonian-connectivity of a multipartite tournament D as follows: For any two distinct vertices x and y of D, there is a path with at least one vertex from each partite set of D from x to y and from y to x. We obtain, in a natural way, the definitions of weak quasi-Hamiltonian-connectivity, where only one of those paths has to exist, and weak quasi-Hamiltonian-set-connectivity, where only one such path between every two distinct partite sets has to exist. In this paper, we characterize weakly quasi-Hamiltonian-set-connected multipartite tournaments, which extends a result of Thomassen (1980) [6].", "keywords": "multipartite tournament;quasi-hamiltonian-connectivity;weak quasi-hamiltonian-set-connectivity", "title": "Weakly quasi-Hamiltonian-set-connected multipartite tournaments"} {"abstract": "This paper presents the findings of a knowledge audit conducted to determine the knowledge requirements of a large service-based enterprise in South Africa. The objective of the knowledge audit was to identify and describe the current and future knowledge requirements of the enterprise. The results indicated that employees have some basic knowledge and information needs that must be satisfied before any further investigations take place. Once the fundamental building blocks of knowledge content are established, it is recommended that more sophisticated solutions be developed. Broad recommendations for establishing a knowledge management strategy that will be a source of sustainable competitive advantage are proposed.", "keywords": "knowledge management;south africa;service industries", "title": "Analysing knowledge requirements: a case study"} {"abstract": "An original active resistor circuit will be presented. The main advantages of the new proposed implementations are improved linearity, small area consumption and improved frequency response.
An original technique for linearizing the I(V) characteristic of the active resistor will be proposed, based on the utilization of a new linear differential amplifier and on a current-pass circuit. The linearization of the original differential structure is achieved by compensating the quadratic characteristic of the MOS transistor operating in the saturation region by an original square-root circuit. The errors introduced by the second-order effects will be strongly reduced, while the frequency response of the circuit is very good as a result of operating all MOS transistors in the saturation region. In order to design a circuit having a negative equivalent resistance, an original method specific to the proposed implementation of the active resistor circuit will be presented. The circuit is implemented in 0.35 μm CMOS technology, the SPICE simulation confirming the theoretically estimated results and showing a linearity error under one percent for an extended input range (±500 mV) and a small value of the supply voltage (±3 V).", "keywords": "active resistor;linearity;negative equivalent resistance", "title": "NEGATIVE RESISTANCE ACTIVE RESISTOR WITH IMPROVED LINEARITY AND FREQUENCY RESPONSE"} {"abstract": "The main purpose of this paper is to examine some (potential) applications of quantum computation in AI and to review the interplay between quantum theory and AI. For the readers who are not familiar with quantum computation, a brief introduction to it is provided, and a famous but simple quantum algorithm is introduced so that they can appreciate the power of quantum computation. Also, a (quite personal) survey of quantum computation is presented in order to give the readers an (unbalanced) panorama of the field. The author hopes that this paper will be a useful map for AI researchers who are going to explore further and deeper connections between AI and quantum computation as well as quantum theory, although some parts of the map are very rough and other parts are empty, waiting for the readers to fill in. ", "keywords": "quantum computation;quantum theory;search;learning;discrimination and recognition;bayesian network;semantic analysis;communication", "title": "Quantum computation, quantum theory and AI"} {"abstract": "The face is an important source of information in multimodal communication. Facial expressions are generated by contractions of facial muscles, which lead to subtle changes in the area of the eyelids, eyebrows, nose, lips and skin texture, often revealed by wrinkles and bulges. To measure these subtle changes, Ekman et al. [5] developed the Facial Action Coding System (FACS). FACS is a human-observer-based system designed to detect subtle changes in facial features, and describes facial expressions by action units (AUs). We present a technique to automatically recognize lower facial Action Units, independently of one another. Even though we do not explicitly take into account AU combinations, thereby making the classification process harder, an average F1 score of 94.83% is achieved.", "keywords": "facial action units;svm;ovl;adaboost", "title": "automatic recognition of lower facial action units"} {"abstract": "Wire bond programming (WBP) consists of the information required to drive a wire bond machine's movement during the wire bonding process. Wire bond programs consist of three key components: material handling, bonding parameter, and bonding path instructions.
Of the three components, the bonding path component requires the most effort and time to prepare, as the preparation of the bonding path is currently carried out manually. The manual process is tedious and error-prone. In comparison to a manual process, offline programming (OLP) of bonding path creation provides a much more reliable and less tedious method, as it is error-proof. OLP can be categorized into two versions, namely vendor-specific OLP and direct integration offline programming (Di-OLP), which is presented in this paper. Vendor-specific OLP utilizes bonding diagrams created by a computer-aided design program to generate wire bonding paths. Di-OLP, on the other hand, utilizes the numeric coordinate data extracted from the bonding diagram creation software to generate the bonding path component of the wire bond program. Di-OLP is a more flexible method as it has the potential to be adapted to different machine platforms. This paper explains the challenges in the implementation of Di-OLP. The effectiveness and efficiency of the program created by Di-OLP are evaluated as compared to a manual programming method. Final results indicate that offline programming is more efficient, as it greatly reduces the time required to create the bonding paths for wire bond programs as compared to the manual methodology.", "keywords": "wire bonding;offline programming;computer-aided design;direct integration offline programming;bondlist", "title": "Development, implementation, and analysis of direct integration offline programming method"} {"abstract": "Characteristics of wireless sensor networks, specifically dense deployment, limited processing power, and limited power supply, provide unique design challenges at the transport layer. Message transmission between sensor nodes over a wireless medium is especially expensive. Care must be taken to design an efficient transport layer protocol that combines reliable message delivery and congestion control with minimal overhead and retransmission. Sensor networks are created using low-cost, low-power nodes. Wireless sensors are assumed to have a finite lifetime; care must be taken to design and implement transport layer algorithms that allow maximum network lifetime. In this paper we present current and future challenges in the design of transport layers for sensor networks. Current transport layer protocols are compared based on how they implement reliable message delivery, congestion control, and energy efficiency.", "keywords": "wireless;networking;wireless sensor network;wsn;transport layer;layer 4;end-to-end reliability", "title": "Transport protocols for wireless sensor networks: State-of-the-art and future directions"} {"abstract": "This article proposes a sliding-mode-based scheme for optimal deceleration in an automotive braking manoeuvre. The scheme is model-based and seeks to maintain the longitudinal slip value associated with the tyre-road contact patch at an optimum value: the point at which the friction coefficient-slip curve reaches a maximum. The scheme assumes only wheel angular velocity is measured, and uses a sliding mode observer to reconstruct the states and a parameter relating to road conditions for use in the controller.
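A toy rendition of the slip-regulating idea described here (quarter-car quantities; the gain K, boundary layer phi and target slip are illustrative values, not taken from the article):

```python
import numpy as np

def brake_torque_correction(v, omega, r_wheel,
                            lambda_opt=0.15, K=800.0, phi=0.02):
    """Sliding-mode-style slip regulator sketch. The sliding surface is
    s = lambda - lambda_opt, with longitudinal slip
    lambda = (v - omega * r) / v; the switching term -K * sat(s / phi)
    drives s to zero, and the boundary layer phi tames chattering."""
    lam = (v - omega * r_wheel) / max(v, 1e-3)  # longitudinal slip
    s = lam - lambda_opt                        # sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)           # smoothed sign(s)
    return -K * sat                             # torque adjustment toward s = 0
```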
The sliding mode controller then seeks to maintain the vehicle at this optimal slip value through an appropriate choice of sliding surface.", "keywords": "sliding modes;observers;nonlinear systems;friction estimation", "title": "Optimal braking and estimation of tyre friction in automotive vehicles using sliding modes"} {"abstract": "Numerical possibility theory and belief functions have been suggested as useful tools to represent imprecise, vague or incomplete information. They are particularly appropriate in uncertainty analysis where information is typically tainted with imprecision or incompleteness. Based on their experience or their knowledge about a random phenomenon, experts can sometimes provide a class of distributions without being able to precisely specify the parameters of a probability model. Frequentists use two-dimensional Monte-Carlo simulation to account for imprecision associated with the parameters of probability models. They hence hope to discover how variability and imprecision interact. This paper presents the limitations and disadvantages of this approach and proposes a fuzzy random variable approach to treat this kind of knowledge. ", "keywords": "imprecise probabilities;possibility;belief functions;probability-boxes;monte-carlo 2d;fuzzy random variable", "title": "Representing parametric probabilistic models tainted with imprecision"} {"abstract": "This article presents the design and preparation, using hypermedia tools, of an interactive CD-ROM for the active teaching and learning of diverse problem-solving strategies in Mathematics for secondary school students. The use of the CD-ROM allows the students to learn, interactively, the heuristic style of solving problems. A range of problems has been used, each of which requires different solving strategies. A complementary section for consulting the theoretical foundations of the process of solving problems and other related information is also included on the CD-ROM. This section provides both theoretical and curriculum support for teachers. ", "keywords": "computer mediated communication;interactive learning environments;multimedia/hypermedia systems;secondary education;teaching/learning strategies", "title": "Designing hypermedia tools for solving problems in mathematics"} {"abstract": "The implementation of new mobile communication technologies developed in the third generation partnership project (3GPP) will allow access to the Internet not only from a PC but also via mobile phones, palmtops and other devices. New applications will emerge, combining several basic services like voice telephony, e-mail, voice over IP, mobility or web-browsing, and thus wiping out the borders between the fixed telephone network, mobile radio and the Internet. Offering those value-added services will be the key success factor for network and service providers in an increasingly competitive market. In 3GPP's service framework, the use of the PARLAY APIs is proposed to allow application development by third parties in order to speed up service creation and deployment. 3GPP has also adopted SIP for session control of multimedia communications in an IP network. This paper proposes a mapping of SIP functionality to PARLAY services and describes a prototype implementation using the SIP Servlet API.
Furthermore, an architecture of a Service Platform is presented that offers a framework for the creation, execution and management of carrier grade multimedia services in heterogeneous networks.", "keywords": "carrier grade services;network-independent services;sip-parlay mapping;caller preferences;service platform", "title": "a service framework for carrier grade multimedia services using parlay apis over a sip system"} {"abstract": "In this paper, we point out the limitation of the paper entitled "Solving Systems of Linear Equations with Relaxed Monte Carlo Method" published in this journal (Tan in J. Supercomput. 22:113-123, 2002). We argue that the relaxed Monte Carlo method presented in Sect. 7 of the paper is only correct under the condition that the coefficient matrix A is diagonally dominant. However, in the non-diagonally dominant case, the corresponding Neumann series may diverge, which would lead to an infinite loop when simulating the iterative Monte Carlo algorithm. In this paper, we first prove that only for a diagonally dominant matrix can the corresponding Neumann series converge and the Monte Carlo algorithm be relaxed. Therefore, the method does not hold for a non-diagonally dominant matrix, no matter whether the relaxation parameter gamma is a single value or a set of values. We then present and analyze the numerical experiment results to verify our arguments.", "keywords": "monte carlo methods;relaxed monte carlo method;diagonally dominant matrix", "title": "Condition for relaxed Monte Carlo method of solving systems of linear equations"} {"abstract": "Computational fluid dynamics (CFD) has been used to investigate the flow of air through the human orotracheal system. Results from an idealised geometry and from a patient-specific geometry created from MRI scans were compared. The results showed a significant difference in the flow structures between the two geometries. Inert particles with diameters in the range 1-9 μm were tracked through the two geometries. Particle diameter has proved to be an important factor in defining the eventual destinations of inhaled particles. Results from our calculations match other experimental and computational results in the literature, and differences between the idealised and patient-specific geometries are less significant.", "keywords": "cfd;image based meshing;respiration", "title": "A computational fluid dynamics study of inspiratory flow in orotracheal geometries"} {"abstract": "An airport is a multi-stakeholder environment, with work processes and operations cutting across a number of organizations. Airport landside operations involve a variety of services and entities that interact and depend on each other. In this paper, we introduce the Landside Modelling and Analysis of Services (LAMAS) tool, which is a multi-agent system, to simulate, analyze and evaluate the interdependencies of services in airport operations. A genetic algorithm is used to distribute resources among the different entities in an airport such that the level of service is maintained.
The problem is modelled as a multi-objective constrained resource allocation problem, with the objective functions being the maximization of quality of service and the minimization of total cost.", "keywords": "quality of service;genetic algorithm;multi-agent system;airport landside;work-processes", "title": "modelling and evolutionary multi-objective evaluation of interdependencies and work processes in airport operations"} {"abstract": "Variability mechanisms are systematically evaluated in the evolution of SPLs. FOP and AFM have shown better adherence to the Open-Closed Principle than CC. When crosscutting concerns are present, AFM are recommended over FOP. Refactoring at the component level has an important impact on AFM and FOP. CC compilation should be avoided when modular design is an important requirement.", "keywords": "software product lines;feature-oriented programming;aspect-oriented programming;aspectual feature modules;variability mechanisms", "title": "A quantitative and qualitative assessment of aspectual feature modules for evolving software product lines"} {"abstract": "Nowadays in modern medicine, computer modeling has already become one of the key methods toward the discovery of new pharmaceuticals. Virtual screening is a necessary process for this discovery. In the procedure of virtual screening, shape matching is the first step to select ligands for a binding protein. In the era of HTS (high throughput screening), a fast algorithm with good results is in demand. Many methods have been discovered to fulfill this requirement. Our method, called Circular Cone, gives another way toward this problem by finding the principal axis. We use modified PCA (principal component analysis) to get the principal axis, around which the rotation is like whirling a cone. With this method, the speed of scoring a pocket and a ligand is very fast, while the accuracy is ordinary. The good speed and general accuracy of our method make it a good choice for HTS.", "keywords": "shape matching;pocket;ligand;new pharmaceuticals;virtual screen;circular cone", "title": "Circular Cone: A novel approach for protein ligand shape matching using modified PCA"} {"abstract": "This article presents the research work that exploits using XML (Extensible Markup Language) to represent different types of information in mobile agent systems, including agent communication messages, mobile agent messages, and other system information. The goal of the research is to build a programmable information base in mobile agent systems through XML representations. The research not only studies using XML in binary agent system space, such as representing agent communication messages and mobile agent messages, but also explores interpretive XML data processing to avoid the need for an interface layer between script mobile agents and system data represented in XML. These XML-based information representations have been implemented in Mobile-C, a FIPA (The Foundation for Intelligent Physical Agents) compliant mobile agent platform. Mobile-C uses FIPA ACL (Agent Communication Language) messages for both inter-agent communication and inter-platform migration. Using FIPA ACL messages for agent migration in FIPA compliant agent systems simplifies the agent platform, reduces development effort, and easily achieves inter-platform migration through well-designed communication mechanisms provided in the system.
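As an illustration of the XML-encoded ACL messages discussed here, a sketch building one with the standard library (the element and attribute names are hypothetical; FIPA ACL fixes the message fields such as performative, sender, receiver and content, not this particular XML rendering, and Mobile-C's actual schema may differ):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML rendering of a FIPA ACL message requesting an agent
# migration; only the field names (performative, sender, receiver,
# content) come from FIPA ACL itself.
msg = ET.Element("acl-message", performative="request")
ET.SubElement(msg, "sender").text = "agent1@platformA"
ET.SubElement(msg, "receiver").text = "ams@platformB"
content = ET.SubElement(msg, "content")
ET.SubElement(content, "mobile-agent").text = "task code and state"
print(ET.tostring(msg, encoding="unicode"))
```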
The interpretive XML data processing capability allows mobile agents in Mobile-C to access XML data directly, without the need for an extra interface layer.", "keywords": "mobile agents;agent communication;mobility;xml", "title": "XML-based agent communication, migration and computation in mobile agent systems"} {"abstract": "With the appearance of digital libraries and information archive centers on the Internet, ", "keywords": "internet;digital libraries;copyright;watermarking;proxy server;content transformation", "title": "Automatic proxy-based watermarking for WWW"} {"abstract": "We introduce a notion of k-convexity and explore polygons in the plane that have this property. Polygons which are k-convex can be triangulated with fast yet simple algorithms. However, recognizing them in general is a 3SUM-hard problem. We give a characterization of 2-convex polygons, a particularly interesting class, and show how to recognize them in O(n log n) time. A description of their shape is given as well, which leads to Erdos-Szekeres type results regarding subconfigurations of their vertex sets. Finally, we introduce the concept of generalized geometric permutations, and show that their number can be exponential in the number of 2-convex objects considered.", "keywords": "convexity;visibility;transversal theory", "title": "On k-convex polygons"} {"abstract": "This paper presents the application of the Voronoi Fast Marching (VFM) method to path planning of mobile formation robots. The VFM method uses the propagation of a wave (Fast Marching) operating on the world model to determine a motion plan over a viscosity map (similar to the refraction index in optics) extracted from the updated map model. The computational efficiency of the method allows the planner to operate at high sensor frequencies. This method allows us to maintain a good response time and smooth and safe planned trajectories. The navigation function can be classified as a type of potential field, but it has no local minima, it is complete (it finds the solution path if one exists) and it has a complexity of order n (O(n)), where n is the number of cells in the environment map. The results presented in this paper show how the proposed method behaves with mobile robot formations and generates trajectories of good quality, without problems of local minima, when the formation encounters non-convex obstacles.", "keywords": "robot formation motion planning;formation control;fast marching", "title": "Robot formation motion planning using Fast Marching"} {"abstract": "Over the last few years many articles have been published in an attempt to provide performance benchmarks for virtual screening tools. While this research has imparted useful insights, the myriad variables controlling said studies place significant limits on the interpretability of results. Here we investigate the effects of these variables, including analysis of calculation setup variation, the effect of target choice, active/decoy set selection (with particular emphasis on the effect of analogue bias) and enrichment data interpretation. In addition, the optimization of the publicly available DUD benchmark sets through analogue bias removal is discussed, as is their augmentation through the addition of large diverse data sets collated using WOMBAT.", "keywords": "virtual screening;enrichment;validation;analogue bias;chemotypes;dud;wombat", "title": "Optimization of CAMD techniques 3.
Virtual screening enrichment studies: a help or hindrance in tool selection"} {"abstract": "Based on the system adaptation framework proposed in our previous work, this paper focuses on the input selection of this framework to identify crucial market influential factors. We first carry out empirical research to preselect influential factors from economic and sentimental aspects. The causal relationship between each of them and the internal residue of the market is then tested. Lastly, a multicollinearity test is applied to those factors that show significant causality to the internal residue of the market, to exclude redundant indicators. As the causal relationship plays an essential role in this method, both linear time-varying and nonlinear causality tests are employed based on the predictive ability of our framework. This double selection method is applied to the US and China stock markets, and it is shown to be efficient in identifying market influential factors. We also find that these influential factors are market-dependent and frequency-dependent. Some well-tested factors in the developed market and in the literature may not work in the emerging market.", "keywords": "financial system modeling;system adaptation;market input selection;causality test", "title": "Identification of stock market forces in the system adaptation framework"} {"abstract": "We present a novel method for quadrangulating a given triangle mesh. After constructing an as-smooth-as-possible symmetric cross field satisfying a sparse set of directional constraints (to capture the geometric structure of the surface), the mesh is cut open in order to enable a low distortion unfolding. Then a seamless globally smooth parametrization is computed whose iso-parameter lines follow the cross field directions. In contrast to previous methods, sparsely distributed directional constraints are sufficient to automatically determine the appropriate number, type and position of singularities in the quadrangulation. Both steps of the algorithm (cross field and parametrization) can be formulated as a mixed-integer problem which we solve very efficiently by an adaptive greedy solver. We show several complex examples where high quality quad meshes are generated in a fully automatic manner.", "keywords": "singularities;parametrization;mixed-integer;remeshing;direction field;quadrangulation", "title": "mixed-integer quadrangulation"} {"abstract": "We introduce the method of proving complexity dichotomy theorems by holographic reductions. Combined with interpolation, we present a unified strategy to prove #P-hardness. Specifically, we prove a complexity dichotomy theorem for a class of counting problems on 2-3 regular graphs expressible by Boolean signatures. For these problems, whenever a holographic reduction followed by interpolation fails to prove #P-hardness, we can show that the problem is solvable in polynomial time.", "keywords": "holographic reduction;polynomial interpolation;#p-hard;counting complexity", "title": "Holographic reduction, interpolation and hardness"} {"abstract": "Equipped with better sensing and learning capabilities, robots nowadays are meant to perform versatile tasks.
To remove the load of detailed analysis and programming from the engineer, a concept has been proposed whereby the robot learns how to execute the task from human demonstration by itself. Following this idea, in this paper we propose an approach for the robot to learn the intention of the demonstrator from the resultant trajectory during task execution. The proposed approach identifies the portions of the trajectory that correspond to delicate and skillful maneuvering. Those portions, referred to as motion features, may implicate the intention of the demonstrator. As the trajectory may result from many possible intentions, finding the correct ones poses a severe challenge. We first formulate the problem into a realizable mathematical form and then employ the method of dynamic programming for the search. Experiments based on the pouring and fruit jam tasks are performed to demonstrate the proposed approach, in which the derived intention is used to execute the same task under different experimental settings.", "keywords": "intention learning;human demonstration;motion feature;robot imitation;skill transfer", "title": "Intention Learning From Human Demonstration"} {"abstract": "With the increase of internet protocol (IP) packets, the performance of routers has become an important issue in internetworking. In this paper we examine the matching algorithm in a gigabit router which has input queues with virtual output queueing. Dynamic queue scheduling is also proposed to reduce the packet delay and packet loss probability. Port partitioning is employed to reduce the computational burden of the scheduler in a switch which matches the input and output ports for fast packet switching. Each port is divided into two groups such that the matching algorithm is implemented within each pair of groups in parallel. The matching is performed by exchanging the pair of groups at every time slot. Two algorithms, maximal weight matching by port partitioning (MPP) and modified maximal weight matching by port partitioning (MMPP), are presented. In dynamic queue scheduling, a popup decision rule is applied to each delay-critical packet to reduce both the delay of the delay-critical packet and the loss probability of loss-critical packets. Computational results show that MMPP has the lowest delay and requires the least buffer size. The throughput is shown to be linear in the packet arrival rate, which can be achieved under a highly efficient matching algorithm. Dynamic queue scheduling is shown to be highly effective when the occupancy of the input buffer is relatively high. To cope with increasing internet traffic, it is necessary to improve the performance of routers. To accelerate the switching from input ports to output ports in the router, partitioning of ports and dynamic queueing are proposed. Input and output ports are partitioned into two groups A/B and a/b, respectively. The matching for the packet switching is performed between group pairs (A, a) and (B, b) in parallel in one time slot and (A, b) and (B, a) in the next time slot. Dynamic queueing is proposed at each input port to reduce the packet delay and packet loss probability by employing the popup decision rule and applying it to each delay-critical packet. The partitioning of ports is shown to be highly effective in terms of delay, required buffer size and throughput.
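A minimal sketch of this alternating group-pairing schedule follows, using a simple greedy weight matching within each group pair as a stand-in for the paper's matching algorithm (port counts and weights are toy assumptions):

```python
# Toy model of port-partitioned matching: input ports are split into
# groups A/B, output ports into a/b; each slot matches one pair of
# groups in parallel, alternating (A,a),(B,b) and (A,b),(B,a).
import itertools, random

N = 8                                   # ports per side (assumed)
A, B = range(0, N // 2), range(N // 2, N)
a, b = range(0, N // 2), range(N // 2, N)
weights = [[random.randint(0, 10) for _ in range(N)] for _ in range(N)]

def greedy_match(inputs, outputs):
    """Greedy maximal weight matching within one group pair."""
    pairs = sorted(itertools.product(inputs, outputs),
                   key=lambda io: weights[io[0]][io[1]], reverse=True)
    used_i, used_o, match = set(), set(), []
    for i, o in pairs:
        if i not in used_i and o not in used_o and weights[i][o] > 0:
            match.append((i, o))
            used_i.add(i); used_o.add(o)
    return match

for slot in range(4):
    if slot % 2 == 0:                   # pairing (A,a) and (B,b)
        matching = greedy_match(A, a) + greedy_match(B, b)
    else:                               # pairing (A,b) and (B,a)
        matching = greedy_match(A, b) + greedy_match(B, a)
    print(f"slot {slot}: {matching}")
```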
The dynamic queueing also demonstrates good performance when the traffic volume is high.", "keywords": "ip-forwarding;scheduling;switch;dynamic-queueing", "title": "Port partitioning and dynamic queueing for IP forwarding"} {"abstract": "In a classroom, a teacher attempts to convey his or her knowledge to the students, and thus it is important for the teacher to obtain formative feedback about how well students are understanding the new material. By gaining insight into the students' understanding and possible misconceptions, the teacher will be able to adjust the teaching and to supply more useful learning materials as necessary. Therefore, the diagnosis of formative student evaluations is critical for teachers and learners, as is the diagnosis of patterns in the overall learning by a class in order to inform a teacher about the efficacy of his or her teaching. This paper investigates what might be called the \"class learning diagnosis problem\" by embedding important concepts in a test and analyzing the results with a hierarchical coding scheme. Based on previous research, the part-of and type-of relationships among concepts are used to construct a concept hierarchy that may then be coded hierarchically. All concepts embedded in the test items can then be formulated into concept matrices, and the answer sheets of the learners in a class are then analyzed to indicate particular types of concept errors. The trajectories of concept errors are studied to identify both individual misconceptions students might have as well as patterns of misunderstanding in the overall class. In particular, a clustering algorithm is employed to distinguish student groups who might share similar misconceptions. These approaches are implemented as an integrated module in a previously developed system and applied to two real classroom data sets, the results of which show the practicability of the proposed method.", "keywords": "concept map;misconception;learning diagnosis;community;clustering", "title": "Learning and diagnosis of individual and class conceptual perspectives: an intelligent systems approach using clustering techniques"} {"abstract": "A modified artificial bee colony algorithm is proposed for the stage shop scheduling problem. The stage shop is a new extension of the mixed shop problem and, as a result, of the job shop and open shop problems. In the employed bee phase of the ABC, a potent neighborhood of the stage shop is used and a tabu search strategy is substituted for greedy selection. In the onlooker bee phase, the particle swarm optimization idea is applied instead of a completely random search. The proposed algorithm obtained new optimal solutions and upper bounds for benchmark problems.", "keywords": "scheduling;stage shop;artificial bee colony;cma-es;particle swarm optimization", "title": "A modified ABC algorithm for the stage shop scheduling problem"} {"abstract": "In the SmartFactory KL, the intelligent factory of the future, a consortium of companies and research facilities explores new, intelligent technologies. Being a development and demonstration center for industrial applications, the SmartFactory KL is arbitrarily modifiable and expandable (flexible), connects components from multiple manufacturers (networked), enables its components to perform context-related tasks autonomously (self-organizing), and emphasizes user-friendliness (user-oriented).
In this paper, we present a prototypical system that enables commercial mobile phones to monitor, diagnose, and remotely control plant components via Bluetooth.", "keywords": "remote operation;mobile interaction;flexible automation;wireless system integration;agile control;smartphone", "title": "demonstrating remote operation of industrial devices using mobile phones"} {"abstract": "In this correspondence, modulation diversity (MD) for frequency-selective fading channels is proposed. The achievable performance with MD is analyzed and a simple design criterion for MD codes for Rayleigh-fading channels is deduced from an upper bound on the pairwise error probability (PEP) for single-symbol transmission. This design rule is similar to the well-known design rule for MD codes for flat fading and does not depend on the power-delay profile of the fading channel. Several examples of MD codes with prescribed properties are given and compared. Besides the computationally costly optimum receiver, efficient low-complexity linear equalization (LE) and decision-feedback equalization (DFE) schemes for MD codes are also introduced. Simulations for the widely accepted COST fading models show that performance gains of several decibels can be achieved by MD combined with LE or DFE at bit-error rates (BERs) of practical interest. In addition, MD also enables the suppression of cochannel interference.", "keywords": "code design;equalization;modulation diversity;performance bounds", "title": "Modulation diversity for frequency-selective fading channels"} {"abstract": "Enhancing students' capabilities regarding team work in real software development projects should be a major objective for any Computer Science department within a university. Starting from the authors' experience in coordinating student teams for the development of complex projects, this paper outlines a set of considerations in this regard.", "keywords": "software life cycle;team work;student project;software development", "title": "team work in software development student projects"} {"abstract": "Gas desorption from the field emitter array (FEA) cathode and phosphor screen anode in a flat panel display during lifetime operation can affect cathode electron emission and degrade display performance and uniformity. We have measured the outgassing products from selected FEA-phosphor pairs in an ultrahigh vacuum system equipped with a calibrated quadrupole residual gas analyzer. Different low voltage phosphors and blank anodes were studied. A Spindt-type FEA was used as the electron source. A unique carousel was used so the desorption from all these different anodes could be measured without intervening vacuum breaks. This allowed the desorption from the different anodes to be directly compared to each other. Quantitative outgassing rates are given, and the implications of the results for the pumping of the flat panel and emission from the FEAs are discussed.", "keywords": "field emission displays;field emitter array;phosphor", "title": "Electron stimulated gas desorption during operation of field emitter phosphor screen pairs"} {"abstract": "The Sznajd model of opinion formation is generalized to small-world networks. This generalization destroys the stalemate fixed point. Then a simple definition of leaders is included. No fixed points are observed. This model displays some interesting aspects in sociology.
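As a rough illustration, here is a minimal sketch of one common network generalization of the Sznajd update rule, assuming the networkx package; the paper's exact dynamics (and its leader definition) may differ:

```python
import random
import networkx as nx

# Watts-Strogatz small-world network with binary opinions (+1/-1).
G = nx.watts_strogatz_graph(n=200, k=4, p=0.1)
opinion = {v: random.choice([-1, 1]) for v in G}

def sznajd_step():
    # Pick a random edge; if the pair agrees, it convinces all
    # neighbors of both members (one common network variant).
    u, v = random.choice(list(G.edges()))
    if opinion[u] == opinion[v]:
        for w in set(G[u]) | set(G[v]):
            opinion[w] = opinion[u]

for _ in range(10_000):
    sznajd_step()

# Magnetization as a simple time-series observable.
print(sum(opinion.values()) / len(opinion))
```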
The model is investigated using time series analysis.", "keywords": "ising model;opinion formation models;small-world networks;leaders", "title": "Application of the sznajd sociophysics model to small-world networks"} {"abstract": "This paper presents a module system and a programming environment designed to support interactive program development in Scheme. The module system extends lexical scoping while maintaining its flavor and benefits and supports mutually recursive modules. The programming environment supports dynamic linking, separate compilation, production code compilation, and a window-based user interface with multiple read-eval-print contexts.", "keywords": "interaction;printing;program;product;systems;recursion;developer;code;dynamic;context;module;user interface;compilation;modular;support;separate compilation;windows;paper;read;programming environment;linking;scheme", "title": "interactive modular programming in scheme"} {"abstract": "In this paper, we investigate various ways of characterizing words, mainly over a binary alphabet, using information about the positions of occurrences of letters in words. We introduce two new measures associated with words, the position index and the sum of position indices. We establish some characterizations, connections with Parikh matrices, and connections with power sums. One particular emphasis concerns the effect of morphisms and iterated morphisms on words.", "keywords": "position of letter;subword;parikh matrix;power sum;iterated morphism;thue morphism;fibonacci morphism", "title": "Subword balance, position indices and power sums"} {"abstract": "This paper introduces a methodology to effectively integrate and control major plant processes with strong couplings between them. The proposed integration philosophy consists of cause-effect relationships and decides upon control setpoints for the individual processes by optimizing a global objective function which aims at improving process yield. A neuro-fuzzy model and a fuzzy objective function are employed to address the integration and control tasks. Such models and objective functions are defined and developed using experimental data or an operator's experience. The objective is to maximize productivity and, at the same time, reduce defects in each of the subsequent operations. A textile plant is considered as a testbed and three major processes (warping, slashing and weaving) are employed to illustrate the feasibility of the approach. The supervisory level of the control architecture is intended to continuously improve the control setpoints depending upon feedback information from the weave room, slasher operator, and warping data.", "keywords": "polynomial fuzzy neural networks;fuzzy logic control;integration;cause-effect relation;genetic algorithms;hybrid genetic optimization", "title": "An intelligent approach to integration and control of textile processes"} {"abstract": "The Maximum Likelihood (ML) estimation based Expectation Maximization (EM) reconstruction algorithm [IEEE Trans Med Imag, MI-1 (2) (1982) 113] has been shown to provide good quality reconstruction for positron emission tomography (PET). Our previous work [IEEE Trans Med Imag, 7(4) (1988) 273; Proc IEEE EMBS Conf, 20(2/6) (1998) 759] introduced the multigrid (MG) and multiresolution (MR) concepts for PET image reconstruction using EM. This work transforms the MGEM and MREM algorithms into a Wavelet based Multiresolution EM (WMREM) algorithm by extending the concept of switching resolutions in both image and data spaces.
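As a rough sketch of what switching to a coarser data-space resolution via a 2D wavelet transform can look like (assuming the PyWavelets package and a toy sinogram; this is illustrative, not the authors' custom lifting filters):

```python
import numpy as np
import pywt

# Toy "tube data" (sinogram); a real scanner would supply this.
sinogram = np.random.poisson(lam=5.0, size=(128, 128)).astype(float)

# One level of a 2D biorthogonal wavelet transform splits the data
# space into a coarse approximation plus detail subbands.
cA, (cH, cV, cD) = pywt.dwt2(sinogram, "bior2.2")

# EM iterations could run first on the coarse data space cA
# (recovering low-frequency image components quickly) before
# switching back to the full-resolution data.
print(sinogram.shape, "->", cA.shape)
```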
The MR data space is generated by performing a 2D wavelet transform on the acquired tube data, and is used to reconstruct images at different spatial resolutions. The wavelet transform is used for MR reconstruction as well as adapted in the criterion for switching resolution levels. The advantage of the wavelet transform is that it provides very good frequency and spatial (time) localization and allows the use of these coarse resolution data spaces in the EM estimation process. The MR algorithm recovers low-frequency components of the reconstructed image at coarser resolutions in fewer iterations, reducing the number of iterations required at finer resolutions to recover high-frequency components. This paper also presents the design of customized biorthogonal wavelet filters, using the lifting method, that are used for data decomposition and image reconstruction, and compares them to other commonly known wavelets.", "keywords": "multiresolution reconstruction;wavelets;expectation maximization;positron emission tomography;lifting scheme", "title": "Wavelet based multiresolution expectation maximization image reconstruction algorithm for positron emission tomography"} {"abstract": "A great deal of research in the area of agent-oriented software engineering (AOSE) focuses on proposing methodologies for agent systems, i.e., on identifying the guidelines to drive the various phases of agent-based software development and the abstractions to be exploited in these phases. However, very little attention has been paid so far to the engineering process underlying the development activity, disciplining the execution of the different phases involved in software development. In this paper, we focus on process models for software development and put these in relation with current research in AOSE. First, we introduce the key concepts and issues related to software processes and present the various software process models currently adopted in mainstream software engineering. Then, we survey the characteristics of a number of agent-oriented methodologies, as they pertain to software processes. In particular, for each methodology, we analyze which software process model (often implicitly) underlies it and which phases of the process are covered by it, thus enabling us to identify some key limitations of current methodology-centered research. On this basis, we eventually identify and analyze several open issues in the area of software process models for agent-based development, calling for further research and experience.", "keywords": "agent-based computing;software engineering;methodologies;process models;agent software development", "title": "Process models for agent-based development"} {"abstract": "For a decentralized computing system of many computing elements (whether geographically distributed mainframe computers, or miniature computing elements within a single board or even chip), a naturally decentralized model is a prerequisite for the organization of computation. This will allow a large number of computing elements to cooperate in the execution of a program. Computational models used in parallel computers (data flow, control flow, and reduction) help to identify the attributes of such a decentralized and general-purpose model. This paper examines data flow, control flow, and reduction models and presents a classification for their underlying concepts.
In addition, it describes a computational model called recursive control flow, which is a synthesis of these concepts and which directly supports data flow, control flow, and reduction computation.", "keywords": "help;organization;systems;recursion;computer modeling;computation;concept;reduction;data;flow control;flow;control flow;general;attributes;synthesis;parallel computation;paper;distributed;model;decentralization;classification", "title": "decentralized computation"} {"abstract": "Topology design of switched local area networks (SLAN) is classified as an NP-hard problem, since a number of objectives, such as monetary cost, network delay, hop count between communicating pairs, and reliability, need to be simultaneously optimized under a set of constraints. This paper presents a multiobjective heuristic based on a simulated annealing (SA) algorithm for topology design of SLAN. Fuzzy logic has been incorporated in the SA algorithm to handle the imprecise multiobjective nature of the SLAN topology design problem, since the logic provides a suitable mathematical framework to address the multiobjective aspects of the problem. To enhance the performance of the proposed fuzzy simulated annealing (FSA) algorithm, two variants of FSA are also proposed. These variants incorporate characteristics of the tabu search (TS) and simulated evolution (SimE) algorithms. The three proposed fuzzy heuristics are compared with each other. Furthermore, two fuzzy operators, namely, the ordered weighted average (OWA) and unified AND-OR (UAO), are also applied in certain steps of these algorithms. Results show that, in general, the variant which embeds characteristics of SimE and TS into the fuzzy SA algorithm exhibits a more intelligent search of the solution subspace and is able to find better solutions than the other two variants of the fuzzy SA. Also, the OWA and UAO operators exhibited relatively similar performance.", "keywords": "network topology;fuzzy logic;distributed networks;simulated annealing;simulated evolution", "title": "Fuzzy hybrid simulated annealing algorithms for topology design of switched local area networks"} {"abstract": "We present a variety of the Standard ML module system where parameterized abstract types (i.e. functors returning generative types) map provably equal arguments to compatible abstract types, instead of generating distinct types at each application as in Standard ML. This extension solves the full transparency problem (how to give syntactic signatures for higher-order functors that express exactly their propagation of type equations), and also provides better support for non-closed code fragments.", "keywords": "fragmentation;ml;order;applications;generation;systems;abstraction;transparency;code;types;module;standardization;support;extensibility;signature;argument;propagation", "title": "applicative functors and fully transparent higher-order modules"} {"abstract": "We present a semiautomatic image editing framework dedicated to individual structured object replacement from groups. The major technical difficulty is element separation with irregular spatial distribution, which hampers previous texture and image synthesis methods from easily producing visually compelling results. Our method uses object-level operations and finds grouped elements based on appearance similarity and curvilinear features.
This framework enables a number of image editing applications, including natural image mixing, structure-preserving appearance transfer, and texture mixing.", "keywords": "natural image;structure analysis;texture;image processing", "title": "ImageAdmixture: Putting Together Dissimilar Objects from Groups"} {"abstract": "In this correspondence, the problem of recursive motion estimation and compensation in image subbands is considered. A pel recursive algorithm is presented for this purpose and it is shown experimentally that the motion can be compensated almost as well as in the original fields. Based on this algorithm, a scalable and recursive video coding scheme is outlined which compares successfully to a hybrid coding scheme based on block matching.", "keywords": "subband decomposition;motion estimation;motion compensation;pel-recursive", "title": "Pel recursive motion estimation and compensation in subbands"} {"abstract": "Monitoring and measuring the accessibility of government Web sites is an important challenge for regulators and policy makers. Moreover, over the next few years, e-government (e-gov) services are expected to expand, and it is necessary to ensure access for everyone. In this paper, we present a metric-based approach for evaluating municipalities' Web pages using automatic accessibility evaluation tools. The sampling of the pages was done by the tool E-GOVMeter, and the accessibility evaluation and generation of the metrics was done by means of an adaptation of the tool Hera. The results show that much work should be done to improve the accessibility of Brazilian municipalities' Web sites. Although it has limitations, the use of automatically generated accessibility metrics is a powerful tool for helping to measure and monitor the accessibility of e-gov Web sites.", "keywords": "web accessibility evaluation;web accessibility;web metrics;e-government", "title": "an approach based on metrics for monitoring web accessibility in brazilian municipalities web sites"} {"abstract": "When describing a physical object, we indicate which object by pointing and using reference terms, such as 'this' and 'that', to inform the listener quickly of an indicated object's location. Therefore, this research proposes a three-layer attention-drawing model for humanoid robots that incorporates such gestures and verbal cues. The proposed three-layer model consists of three sub-models: the Reference Term Model (RTM); the Limit Distance Model (LDM); and the Object Property Model (OPM). The RTM selects an appropriate reference term for a distance, based on a quantitative analysis of human behaviour. The LDM decides whether to use a property of the object, such as colour, as an additional term for distinguishing the object from its neighbours. The OPM determines which property should be used for this additional reference. Based on this concept, an attention-drawing system was developed for a communication robot named 'Robovie', and its effectiveness was tested.", "keywords": "human-robot interface;human-robot interaction;deictic gestures", "title": "Humanlike conversation with gestures and verbal cues based on a three-layer attention-drawing model"} {"abstract": "A fully integrated 0.18-μm CMOS LC-tank voltage-controlled oscillator (VCO) suitable for low-voltage and low-power S-band wireless applications is proposed in this paper.
In order to meet the requirement of low voltage applications, a differential configuration with two cross-coupled pairs, adopting an admittance-transforming technique, is employed. By using forward-body-biased metal oxide semiconductor field effect transistors, the proposed VCO can operate at a 0.4 V supply voltage. Despite the low power supply near the threshold voltage, the VCO achieves a wide tuning range by using a voltage-boosting circuit and standard mode PMOS varactors in the proposed oscillator architecture. The simulation results show that the proposed VCO achieves a phase noise of -120.1 dBc/Hz at 1 MHz offset and a 39.3% tuning range while consuming only 594 μW from a 0.4 V supply. The figure-of-merit with tuning range of the proposed VCO is -192.1 dB at 3 GHz.", "keywords": "voltage-controlled oscillator;forward-body-biased;admittance-transforming;voltage-boosting;low voltage;low power", "title": "A Low-Voltage and Low-Power 3-GHz CMOS LC VCO for S-Band Wireless Applications"} {"abstract": "Combating Web spam has become one of the top challenges for Web search engines. State-of-the-art spam-detection techniques are usually designed for specific, known types of Web spam and are incapable of dealing with newly appearing spam types efficiently. With user-behavior analyses from Web access logs, a spam page-detection algorithm is proposed based on a learning scheme. The main contributions are the following. (1) User-visiting patterns of spam pages are studied, and a number of user-behavior features are proposed for separating Web spam pages from ordinary pages. (2) A novel spam-detection framework is proposed that can detect various kinds of Web spam, including newly appearing ones, with the help of the user-behavior analysis. Experiments on large-scale practical Web access log data show the effectiveness of the proposed features and the detection framework.", "keywords": "measurement;experimentation;human factors;spam detection;web search engine;user behavior analysis", "title": "Identifying Web Spam with the Wisdom of the Crowds"} {"abstract": "While most of the work on metaphors has focused on conceptual ones, less attention has been paid to visual metaphors for insight problems. This paper investigates the role of dynamism and realism in visual metaphors for cueing the insight problem solving process. To match the visual-kinesthetic feature of the eight-coin insight problem, the developed metaphors represented the insight cues both kinetically and kinesthetically. An experimental study showed the superiority of metaphors rendered as realistic and continuous animations over schematic and discrete animations.", "keywords": "visual insight problem;animation;visual metaphors;image schemata.;realism", "title": "image schemata in animated metaphors for insight problem solving"} {"abstract": "We propose an alternative method of machine-aided translation: Structure-Based Machine Translation (SBMT). SBMT uses language structure matching techniques to reduce complicated grammar rules and provide efficient and feasible translation results.
SBMT comprises the following four features: (1) source language input sentence analysis; (2) source language sentence transformation into target language structure; (3) dictionary lookup; and (4) semantic disambiguation or word sense disambiguation (WSD) for correct output selection. SBMT has been designed and a prototype system has been implemented that generates satisfactory translations.", "keywords": "machine translation", "title": "English-Thai structure-based machine translation"} {"abstract": "In this paper, we consider a semi-linear wave equation with damping and source terms. Using a potential well method, we prove the existence and uniqueness of global solutions of the wave equation and investigate uniform decay rates of solutions. Moreover, an example is given to illustrate our results.", "keywords": "existence of solution;energy decay;source term;numerical result", "title": "Global existence and uniform decay rates for the semi-linear wave equation with damping and source terms"} {"abstract": "Many media streams consist of distinct objects that repeat. For example, broadcast television and radio signals contain advertisements, call sign jingles, songs, and even whole programs that repeat. The problem we address is to explicitly identify the underlying structure in repetitive streams and de-construct them into their component objects. Our algorithm exploits dimension reduction techniques on the audio portion of a multimedia stream to make search and buffering feasible. Our architecture assumes no a priori knowledge of the streams, and does not require that the repeating objects (ROs) be known. Everything the system needs, including the position and duration of the ROs, is learned on the fly. We demonstrate that it is perfectly feasible to identify, in real time, ROs that occur days or even weeks apart in audio or video streams. Both the compute and buffering requirements are comfortably within reach of a basic desktop computer. We outline the algorithms, enumerate several applications and present results from real broadcast streams.", "keywords": "audio fingerprint;low-dimension representation;multimedia;repeats;segmentation", "title": "ARGOS: Automatically extracting repeating objects from multimedia streams"} {"abstract": "We have developed a second-order numerical method, based on the matched interface and boundary (MIB) approach, to solve the Navier-Stokes equations with discontinuous viscosity and density on non-staggered Cartesian grids. We have derived, for the first time, the interface conditions for the intermediate velocity field and the pressure potential function that are introduced in the projection method. Differentiation of the velocity components on stencils across the interface is aided by the coupled fictitious velocity values, whose representations are solved for by using the coupled velocity interface conditions. These fictitious values and the non-staggered grid allow a convenient and accurate approximation of the pressure and potential jump conditions. A compact finite difference method was adopted to explicitly compute the pressure derivatives at regular nodes to avoid pressure-velocity decoupling. Numerical experiments verified the desired accuracy of the numerical method. Applications to geophysical problems demonstrated that the sharp pressure jumps on the clast-Newtonian matrix interface are accurately captured for various shear conditions, moderate viscosity contrasts and a wide range of density contrasts.
We showed that large transfer errors will be introduced into the jumps of the pressure and the potential function in the case of a large absolute difference of the viscosity across the interface; these errors will cause simulations to become unstable.", "keywords": "navier-stokes equations;geophysics;multi-flow;jump conditions;interface method;non-staggered grid;projection method;stability", "title": "A matched interface and boundary method for solving multi-flow Navier-Stokes equations with applications to geodynamics"} {"abstract": "A small variety of methods and techniques are presented in the literature as solutions for managing requirements elicitation for Web applications. However, the existing state of the art lacks research regarding practical, functioning solutions that match Web application characteristics. The main concern of this paper is how requirements for Web applications can be elicited. The Viewpoint-Oriented Requirements Definition method (VORD) is chosen for eliciting and formulating Web application requirements in an industrial case study. VORD is helpful because it allows structuring of requirements around viewpoints and formulating very detailed requirements specifications. Requirements were understandable to the client with minimal explanation but failed to capture the business vision, strategy, and daily business operations, and could not anticipate the changes in the business process arising as a consequence of introducing the Web application within the organisation. The paper concludes with a discussion of how to adapt and extend VORD to suit Web applications.", "keywords": "viewpoint-oriented requirements definition ;web applications ;web requirements engineering ;business strategy", "title": "Eliciting Web application requirements an industrial case study"} {"abstract": "This paper deals with the problem of constructing a Hamiltonian cycle of optimal weight, called TSP. We show that TSP is 2/3-differential approximable and cannot be differential approximable beyond 649/650. Next, we demonstrate that, when dealing with edge costs 1 and 2, the same algorithmic idea improves this ratio to 3/4, and we obtain a differential non-approximation threshold equal to 741/742. Note that the 3/4-differential approximation result has recently been proved in a way more specific to the 1-2 case, and with another algorithm, at the Symposium on Fundamentals of Computation Theory, 2001. Based upon these results, we establish new bounds for the standard ratio: 5/6 for MaxTSP[a, 2a] and 7/8 for MaxTSP[1, 2]. We also derive some approximation results for the partition of graphs by paths.", "keywords": "approximation algorithms;differential ratio;performance ratio;analysis of algorithms", "title": "Differential approximation results for the traveling salesman and related problems"} {"abstract": "We use one of the influential quantum game models, the Marinatto-Weber model, to investigate quantum Bayesian games.
We show that in a quantum Bayesian game which has more than one Nash equilibrium, one equilibrium stands out as the compelling solution, whereas two Nash equilibria seem equally compelling in the classical Bayesian game.", "keywords": "game theory;bayesian game;quantum game;nash equilibrium", "title": "Quantum Bayesian game with symmetric and asymmetric information"} {"abstract": "While most existing sports video research focuses on detecting events in soccer and baseball, little work has been contributed to flexible content summarization of racquet sports video, e.g. tennis and table tennis. By taking advantage of the periodicity of video shot content and audio keywords in racquet sports video, we propose a novel flexible video content summarization framework. Our approach combines a structure event detection method with a highlight ranking algorithm. Firstly, unsupervised shot clustering and supervised audio classification are performed to obtain the visual and audio mid-level patterns, respectively. Then, a temporal voting scheme for structure event detection is proposed by utilizing the correspondence between audio and video content. Finally, by using the affective features extracted from the detected events, a linear highlight model is adopted to rank the detected events in terms of their degree of excitement. Experimental results show that the proposed approach is effective.", "keywords": "sports video summarization;scene segmentation;temporal voting strategy;highlight ranking", "title": "A framework for flexible summarization of racquet sports video using multiple modalities"} {"abstract": "An efficient unambiguous stereo matching technique is presented in this paper. Our main contribution is to introduce a new reliability measure for dynamic programming approaches in general. For the stereo vision application, the reliability of a proposed match on a scanline is defined as the cost difference between the globally best disparity assignment that includes the match and the globally best assignment that does not include the match. A reliability-based dynamic programming algorithm is derived accordingly, which can selectively assign disparities to pixels when the corresponding reliabilities exceed a given threshold. The experimental results show that the new approach can produce dense (>70 percent of the unoccluded pixels) and reliable (error rate <0.5 percent) matches efficiently (<0.2 sec on a 2GHz P4) for the four Middlebury stereo data sets.", "keywords": "stereo;dynamic programming.", "title": "Unambiguous stereo matching using reliability-based dynamic programming"} {"abstract": "This article presents the improvement of a defect recognition system for wooden boards by using knowledge integration from two expert fields. The two kinds of knowledge to integrate concern wood expertise and industrial vision expertise, respectively. First, the extraction, modelling and integration of knowledge use the Natural Language Information Analysis Method (NIAM) to formalize knowledge from its natural language expression. Then, to improve a classical industrial vision system, we propose to use the resulting symbolic model of knowledge to partially build a numeric model of wood defect recognition. This model is created according to a tree structure where each inference engine is a fuzzy rule-based inference system. The expert knowledge model previously obtained is used to configure each node of the resulting hierarchical structure.
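As a rough illustration of what one such fuzzy rule-based node can look like, here is a minimal Mamdani-style sketch with invented membership functions and rules (not the system's actual knowledge base):

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def defect_node(darkness, elongation):
    # Two toy rules (AND realized as min):
    #   IF darkness is high AND elongation is high THEN crack score is high
    #   IF darkness is high AND elongation is low  THEN knot score is high
    dark_high = tri(darkness, 0.4, 0.8, 1.0)
    elong_high = tri(elongation, 0.4, 0.8, 1.0)
    elong_low = tri(elongation, 0.0, 0.2, 0.5)
    return {"crack": min(dark_high, elong_high),
            "knot": min(dark_high, elong_low)}

print(defect_node(darkness=0.75, elongation=0.9))
```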
The practical results we obtained under industrial conditions show the efficiency of such an approach.", "keywords": "knowledge integration;niam method;orm model;pattern recognition;fuzzy logic", "title": "Contribution of fuzzy reasoning method to knowledge integration in a defect recognition system"} {"abstract": "In this work, we consider external effects on a coupled shallow water ocean-atmosphere model. The forced shallow water equation due to the wind action is studied through a decomposition technique. Induced free perturbations of the permanent response are identified by using a spectral operator basis that is generated by a dynamical Green function. The latter was determined through a spectral technique. By using the semi-Lagrangian method, we compute the non-linear response of the full model due to a shear stress that comes from the action of the wind at the ocean surface.", "keywords": "forced response;decomposition;semi-lagrangian method;dynamic green function;geophysical modelling", "title": "The free surface of a coupled ocean-atmosphere model due to forcing effects"} {"abstract": "An effective inventory replenishment method employed in the supply chain is one of the key factors to achieving low inventory while maintaining high customer delivery performance. The state of the demand process is often not directly observed by the decision maker. Thus, in much of the literature, the inventory control problem is modelled as a composite-state, partially observed Markov decision process (POMDP), which is an appropriate model for a number of dynamic demand problems. In practice, managers often use certainty equivalent control (CEC) policies to solve such a problem. However, the Theory of Constraints (TOC) has brought a practical control policy that almost always provides much better solutions for this problem than the CEC policies commonly used in practice. In this paper, we propose three different inventory control policies based on the TOC buffer management framework, and use a simulation approach to compare them with the traditional adaptive (s,S,T) policy. The computational results indicate how specific problem characteristics influence the performance of the whole system and demonstrate the efficiency of the proposed control policies.", "keywords": "inventory control policy;nonstationary demand;theory of constraints;buffer management", "title": "Research on Inventory Control Policies for Nonstationary Demand based on TOC"} {"abstract": "We consider in this paper a class of Publish-Subscribe (pub-sub) systems called topic-based systems, where users subscribe to topics and are notified of events that belong to those subscribed topics. With the recent flourishing of RSS news syndication, these systems are regaining popularity and are raising new challenging problems. In most modern topic-based systems, the events in each topic are delivered to the subscribers via a supporting, distributed, data structure (typically a multicast tree). Since peers in the network may come and go frequently, this supporting structure must be continuously maintained so that \"holes\" do not disrupt event delivery. The dissemination of events in each topic thus incurs two main costs: (1) the actual transmission cost for the topic events, and (2) the maintenance cost for its supporting structure. This maintenance overhead becomes particularly dominant when a pub-sub system supports a large number of topics with moderate event frequency, a typical scenario in today's news syndication scene.
The goal of this paper is to devise a method for reducing this maintenance overhead to the minimum. Our aim is not to invent yet another topic-based pub-sub system, but rather to develop a generic technique for better utilization of existing platforms. Our solution is based on a novel distributed clustering algorithm that utilizes correlations between user subscriptions to dynamically group topics together into virtual topics (called topic-clusters), and thereby unifies their supporting structures and reduces costs. Our technique continuously adapts the topic-clusters and the user subscriptions to the system state, and incurs only very minimal overhead. We have implemented our solution in the Tamara pub-sub system. Our experimental study shows this approach to be extremely effective, improving performance by an order of magnitude.", "keywords": "publish-subscribe;dynamic clustering;peer-to-peer", "title": "boosting topic-based publish-subscribe systems with dynamic clustering"} {"abstract": "With wider availability of low cost multi-view cameras, 3D displays, and broadband communication options, 3D media is destined to move from the movie theater to home and mobile platforms. In the near term, popular 3D media will most likely be in the form of stereoscopic video with associated spatial audio. Recent trials indicate that consumers are willing to watch stereoscopic 3D media on their TVs, laptops, and mobile phones. While it is possible to broadcast 3D stereoscopic media (two views) over digital TV platforms today, streaming over IP will provide a more flexible approach for distribution of 3D media to users with different connection bandwidths and different 3D displays. In the intermediate term, free-view 3D video and 3DTV with multi-view capture are the next steps in the evolution of 3D media technology. Recent free-view 3D auto-stereoscopic displays can display multi-view video, ranging from 5 to 200 views. Transmission of multi-view 3D media, via broadcast or on-demand, to end users with varying 3D display terminals and bandwidths is one of the biggest challenges to realizing the vision of bringing the 3D media experience to home and mobile devices. This requires flexible rate-scalable, resolution-scalable, view-scalable, view-selective, and packet-loss resilient transport methods. In this talk, I will first briefly review the state of the art in 3D video formats, coding methods, IP streaming protocols and streaming architectures. We will then take a look at 3D video transport options. There are two main platforms for 3D broadcasting: standard digital television (DTV) platforms and the IP platform. I will summarize the approach of the European project DIOMEDES, which is developing novel methods for adaptive streaming of multi-view video over a combination of DVB and IP platforms. I will also summarize additional challenges associated with real-time interactive 3D video communications for applications such as 3D telepresence. Finally, open research challenges for the long term vision of haptic video and holographic 3D video will be presented.", "keywords": "media streaming;3dtv;video communication", "title": "3dtv and 3d video communications"} {"abstract": "In the current study, the relationship between objective measurements and subjectively experienced comfort and discomfort in using handsaws was examined. Twelve carpenters evaluated five different handsaws.
Objective measures of contact pressure (average pressure, pressure area and pressure-time (Pt) integral) in static and dynamic conditions, muscle activity (electromyography) of five muscles of the upper extremity, and productivity were obtained during a sawing task. Subjective comfort and discomfort were assessed using the comfort questionnaire for hand tools and a scale for local perceived discomfort (LPD). We did not find any relationship between muscle activity and comfort or discomfort. The Pt integral during the static measurement (beta=-0.24, p<0.01) was the best predictor of comfort, and the pressure area during the static measurement was the best predictor of LPD (beta=0.45, p<0.01). Additionally, productivity was highly correlated with comfort (beta=0.31, p<0.01) and discomfort (beta=-0.49, p<0.01).", "keywords": "comfort/discomfort;hand tools;objective measurements", "title": "Association between objective and subjective measurements of comfort and discomfort in hand tools"} {"abstract": "This study is the follow-up to a previous one devoted to soil pore space modelling. In the previous study, we proposed algorithms to represent soil pore space by means of optimal piecewise approximation using simple 3D geometrical primitives: balls, cylinders, cones, etc. In the present study, we use the ball-based piecewise approximation to simulate biological activity. The basic idea for modelling pore space consists in representing pore space using a minimal set of maximal balls (Delaunay spheres) recovering the shape skeleton. In this representation, each ball is considered as a maximal local cavity corresponding to the intuitive notion of a pore as described in the literature. The space segmentation induced by the network of balls (pores) is then used to spatialise biological dynamics. Organic matter and microbial decomposers are distributed within the balls (pores). A valuated graph representing the pore network, organic matter and microorganism distribution is then defined. Microbial soil organic matter decomposition is simulated by updating this valuated graph. The method has been implemented and tested on real data. As far as we know, this approach is the first one to formally link pore space geometry and biological dynamics. The long-term goal is to define geometrical typologies of pore space shape that can be attached to specific biological dynamic properties. This paper is a first attempt to achieve this goal.", "keywords": "3d computer vision;biological dynamics simulation;computed tomography;computational geometry;microbial decomposition;pore space modelling", "title": "Using pore space 3D geometrical modelling to simulate biological activity: Impact of soil structure"} {"abstract": "This paper presents a theoretical model developed for estimating the power, the optical signal to noise ratio and the number of generated carriers in a comb generator, taking as a reference the minimum optical signal to noise ratio at the receiver input for a given fiber link. Based on the recirculating frequency shifting technique, the generator relies on the use of coherent and orthogonal multi-carriers (Coherent-WDM) and makes use of a single laser source (seed) for feeding high capacity (above 100 Gb/s) systems.
The theoretical model has been validated by an experimental demonstration, in which 23 comb lines with an optical signal to noise ratio ranging from 25 to 33 dB, in a spectral window of approximately 3.5 nm, are obtained.", "keywords": "coherent-wdm;comb generator;energy efficiency;high capacity optical fiber transport;orthogonal frequency division multiplexing;recirculating frequency shifting;spectral efficiency", "title": "Design of a Comb Generator for High Capacity Coherent-WDM Systems"} {"abstract": "MURPHY is a language-independent, experimental methodology for building safety-critical, real-time software, which will include an integrated tool set. Using Ada as an example, this paper presents a technique for verifying the safety of complex, real-time software using Software Fault Tree Analysis. The templates for Ada are presented along with an example of applying the technique to an Ada program. The tools in the MURPHY tool set that aid in this type of analysis are described.", "keywords": "tree;software;examples;experimentation;methodology;analysis;language;verification;tool;fault;complexity;paper;safety critical;template;real-time;tools;integrability", "title": "safety verification in murphy using fault tree analysis"} {"abstract": "Polls show a strong decline in public trust of traditional news outlets; however, social media offers new avenues for receiving news content. This experiment used the Facebook API to manipulate whether a news story appeared to have been posted on Facebook by one of the respondent's real-life Facebook friends. Results show that social media recommendations improve levels of media trust, and also make people want to follow more news from that particular media outlet in the future. Moreover, these effects are amplified when the real-life friend sharing the story on social media is perceived as an opinion leader. Implications for democracy and the news business are discussed.", "keywords": "social media;news media effects;experiment;opinion leader;media trust;two-step flow;interpersonal communication", "title": "News Recommendations from Social Media Opinion Leaders: Effects on Media Trust and Information Seeking"} {"abstract": "Modeling of laser-plasma wakefield accelerators in an optimal frame of reference [1] has been shown to produce orders-of-magnitude speed-ups of calculations from first principles. Obtaining these speedups required mitigation of a high-frequency instability that otherwise limits effectiveness. In this paper, methods are presented which mitigated the observed instability, including an electromagnetic solver with tunable coefficients, its extension to accommodate Perfectly Matched Layers and Friedman's damping algorithm, as well as an efficient large-bandwidth digital filter. It is observed that choosing the frame of the wake as the frame of reference allows for higher levels of filtering or damping than is possible in other frames for the same accuracy. Detailed testing also revealed the existence of a singular time step at which the instability level is minimized, independently of numerical dispersion. A combination of the techniques presented in this paper proves to be very efficient at controlling the instability, allowing for efficient direct modeling of 10 GeV-class laser plasma accelerator stages.
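For intuition, here is a minimal sketch of the kind of wide-band multi-pass binomial smoothing commonly used to damp short-wavelength noise in particle-in-cell fields (a generic illustration, not the paper's specific filter design):

```python
import numpy as np

def binomial_smooth(field, passes=4):
    """Apply a 1D (1,2,1)/4 binomial stencil repeatedly; each pass
    further attenuates the shortest (Nyquist-scale) wavelengths."""
    kernel = np.array([0.25, 0.5, 0.25])
    for _ in range(passes):
        field = np.convolve(field, kernel, mode="same")
    return field

x = np.linspace(0, 2 * np.pi, 256)
signal = np.sin(x)                                  # physical mode
noise = 0.3 * np.cos(np.pi * np.arange(256))        # Nyquist-frequency noise
filtered = binomial_smooth(signal + noise)
print(np.max(np.abs(filtered - np.sin(x))))         # noise largely removed
```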
The methods developed in this paper may have broader application, to other Lorentz-boosted simulations and Particle-In-Cell simulations in general.", "keywords": "laser wakefield acceleration;particle-in-cell;plasma simulation;special relativity;boosted frame;numerical instability", "title": "Numerical methods for instability mitigation in the modeling of laser wakefield accelerators in a Lorentz-boosted frame"} {"abstract": "This article studies online scheduling of equal length jobs with precedence constraints on m parallel batching machines. The jobs arrive over time. The objective is to minimise the total weighted completion time of jobs. Denote the size of each batch by b, with b = ∞ in the unbounded batching and b < ∞ in the bounded batching. For the unbounded batching version, we provide an online algorithm with a best possible competitive ratio of α_m, where α_m is the positive solution of α^(m+1) - α = 1. The algorithm is also best possible when the jobs have identical weights. For the bounded batching version with identical weights of jobs, we provide an online algorithm with a competitive ratio of 2.", "keywords": "online scheduling;precedence constraints;parallel batch", "title": "Online scheduling on batching machines to minimise the total weighted completion time of jobs with precedence constraints and identical processing times"} {"abstract": "The communications characteristics of multiaccess computing are generating new needs for communications. The results of a study of multiaccess computer communications are the topic of this paper. The analyses made are based on a model of the user-computer interactive process that is described and on data that were collected from operating computer systems. Insight into the performance of multiaccess computer systems can be gleaned from these analyses. In this paper emphasis is placed on communications considerations. For this reason, the conclusions presented deal with the characteristics of communications systems and services appropriate for multiaccess computer systems.", "keywords": "interaction;communication;operability;data;process;service;communication systems;systems;model;paper;user;performance;computation", "title": "a study of multiaccess computer communications"} {"abstract": "BIM, Semantic Web and Linked Data are key technologies for construction information. Progress is being made from often too-common ontological concepts to Linked Data. Semantic Web sustainable construction applications are now emerging.", "keywords": "built environment;climate change;linked open data;semantic web", "title": "Trends in built environment semantic Web applications: Where are we today"} {"abstract": "This paper presents a new management method for morphological variation of keywords. The method is called FCG, Frequent Case Generation. It is based on the skewed distributions of word forms in natural languages and is suitable for languages that have either a fair amount of morphological variation or are morphologically very rich. 
The proposed method has been evaluated so far with four languages, Finnish, Swedish, German and Russian, which show varying degrees of morphological complexity.", "keywords": "evaluation;word form generation;monolingual information retrieval;management of morphological variation", "title": "management of keyword variation with frequency based generation of word forms in ir"} {"abstract": "The method for calculating the specific conductivity tensor of an anisotropically conductive medium, proposed in this paper, distinguishes itself by the simplicity of physical measurements: it suffices to make an equally thick rectangle-shaped sample with four electrodes fixed on its sides and to take various measurements of current intensity and differences of potentials. The necessary mathematical calculations can be promptly performed, even without using a complex computing technique. The accuracy of the results obtained depends on the dimensions of the sample and on the ratios of the conductivity tensor components.", "keywords": "modelling;anisotropic media;electrical conductivity;current flow;numerical methods;measurements", "title": "The extension of the van der Pauw method to anisotropic media"} {"abstract": "In this article, the particle swarm optimization algorithm is used to calculate the complex excitations, amplitudes and phases, of the adaptive circular array elements. To illustrate the performance of this method for steering a signal in the desired direction and imposing nulls in the direction of interfering signals by controlling the complex excitation of each array element, two types of arrays are considered. A uniform circular array (UCA) and a planar uniform circular array (PUCA) with 16 elements of half-wave dipoles are examined. Also, the performance of an adaptive array using 3-bit amplitude and 4-bit phase shifters is studied. In our analysis, the method of moments is used to estimate the response of the dipole UCAs in a mutual coupling environment.", "keywords": "smart antennas;adaptive beamforming;method of moments;mutual coupling;uniform circular arrays;particle swarm optimization algorithm;adaptive array", "title": "Analysis of uniform circular arrays for adaptive beamforming applications using particle swarm optimization algorithm"} {"abstract": "In this paper we present a parallel method for solving two-stage stochastic linear programs with restricted recourse. The mathematical model considered here can be used to represent several real-world applications, including financial and production planning problems, for which significant changes in the recourse solutions should be avoided because they are difficult to implement. Our parallel method is based on a primal-dual path-following interior point algorithm, and exploits fruitfully the dual block-angular structure of the constraint matrix and the special block structure of the matrices involved in the restricted recourse model. We describe and discuss both message-passing and shared-memory implementations and we present the numerical results collected on the Origin2000.", "keywords": "stochastic programming;restricted recourse;interior point methods;numa multiprocessor system;pvm;openmp", "title": "Parallel algorithms to solve two-stage stochastic linear programs with robustness constraints"} {"abstract": "Vine pair-copula constructions (PCCs) provide an important milestone for the usage of multivariate copulas to model dependence. At present, PCCs are recognized to be the most flexible class of multivariate copulas. 
Vine PCCs and semiparametric copula-based dynamic (SCOMDY) models with ARMA-GARCH margins are combined. As building blocks of the PCCs, bivariate t-copulas are used. Exchange rates are considered as an application and their dependence structure is modelled using regular and canonical vines. A non-nested model comparison of the above SCOMDY models is performed using the adapted Vuong's test.", "keywords": "multivariate copula;garch-arma margins;exchange rates;pair-copula construction;vines", "title": "SCOMDY models based on pair-copula constructions with application to exchange rates"} {"abstract": "Most of the currently proposed routing protocols in delay-tolerant networks (DTNs) are designed based on entity mobility. In this article, we consider the routing in DTN with group mobility, which is useful in modeling those cooperative activities. The new proposed routing scheme is called group-epidemic routing (G-ER). G-ER is designed on the basis of one DTN protocol called epidemic routing (ER). In G-ER, two strategies related to the unique characteristics of the group mobility have been proposed to greatly improve ER. The first is to treat each group as a single node and exchange packets between groups instead of individual nodes. Thus, the resource-consuming problem of ER could be much alleviated. In the meantime, exchanging packets between two groups could speed up the packet delivery. The second is the buffer sharing inside a group, which is supported by the cooperative nature in group mobility. Moreover, we specifically propose a group dynamic model for group mobility to realize group splitting and merging. The performance of G-ER is studied by extensive simulations and compared with ER and dynamic source routing (DSR). Results show that G-ER outperforms ER and DSR in different network scenarios even with group dynamics.", "keywords": "delay-tolerant network;group mobility;group-epidemic routing;epidemic routing;group dynamic model", "title": "Routing strategy in disconnected mobile ad hoc networks with group mobility"} {"abstract": "Interactive isosurface extraction has recently become possible through successful efforts to map algorithms such as Marching Cubes (MC) and Marching Tetrahedra (MT) to modern Graphics Processing Unit (GPU) architectures. Other isosurfacing algorithms, however, are not so easily portable to GPUs, either because they involve more complex operations or because they are not based on discrete case tables, as is the case with most marching techniques. In this paper, we revisit the Dual Contouring (DC) and Macet isosurface extraction algorithms and propose, respectively: (i) a novel, efficient and parallelizable version of Dual Contouring and (ii) a set of GPU modules which extend the original Marching Cubes algorithm. Similar to marching methods, our novel technique is based on a case table, which allows for a very efficient GPU implementation. In addition, we enumerate and evaluate several alternatives to implement efficient contouring algorithms on the GPU, and present trade-offs among all approaches. Finally, we validate the efficiency and quality of the tessellations produced in all these alternatives.", "keywords": "isosurfacing;marching cubes;gpu", "title": "Efficient and Quality Contouring Algorithms on the GPU"} {"abstract": "Cellular Automata rules often produce spatial patterns which make them recognizable by human observers. 
Nevertheless, it is generally difficult, if not impossible, to identify the characteristic(s) that make a rule produce a particular pattern. Discovering rules that produce spatial patterns that a human being would find \"similar\" to another given pattern is a very important task, given its numerous possible applications in many complex systems models. In this paper, we propose a general framework to accomplish this task, based on a combination of Machine Learning strategies including Genetic Algorithms and Artificial Neural Networks. This framework is tested on a 3-values, 6-neighbors, k-totalistic cellular automata rule called the \"burning paper\" rule. Results are encouraging and should pave the way for the use of our framework in real-life complex systems models.", "keywords": "spatial patterns;pattern recognition;rule evolution;machine learning;hybrid learning systems;neural networks;genetic algorithms", "title": "Cellular Automata Pattern Recognition and Rule Evolution Through a Neuro-Genetic Approach"} {"abstract": "Activity patterns of metabolic subnetworks, each of which can be regarded as a biological function module, were focused on in order to clarify biological meanings of observed deviation patterns of gene expressions induced by various chemical stimuli. We tried to infer association structures of genes by applying the multivariate statistical method called graphical Gaussian modeling to the gene expression data in a subnetwork-wise manner. It can be expected that the obtained graphical models will provide reasonable relationships between gene expressions and macroscopic biological functions. In this study, the gene expression patterns in nematodes under various conditions (stresses by chemicals such as heavy metals and endocrine disrupters) were observed using DNA microarrays. The graphical models for metabolic subnetworks were obtained from these expression data. The obtained models (independence graph) represent gene association structures of cooperativities of genes. We compared each independence graph with a corresponding metabolic subnetwork. Then we obtained a pattern that is a set of characteristic values for these graphs, and found that the pattern of heavy metals differs considerably from that of endocrine disrupters. This implies that a set of characteristic values of the graphs can represent a macroscopic biological meaning.", "keywords": "gene expression pattern;graphical gaussian modeling;association structure;metabolic network", "title": "Graphical Gaussian modeling for gene association structures based on expression deviation patterns induced by various chemical stimuli"} {"abstract": "In recent years, the SCESM community has studied a number of synthesis approaches that turn scenario descriptions into some kind of state machine. In our story driven modeling approach, the statechart synthesis is done manually. Many other approaches rely on human interaction, too. Frequently, the resulting state machines are just the starting point for further system development. The manual steps, the human interaction, and the subsequent development steps are subject to the introduction of errors. Thus, it is not guaranteed that the final implementation still covers the initial scenarios. Therefore, this paper proposes the exploitation of scenarios for the derivation of automatic tests. These tests may be used to force the implementation to cover at least the behavior outlined in the requirements scenarios. 
In addition, this approach raises the value of formal scenarios for requirements elicitation and analysis since such scenarios are turned into automatic tests that may be used to drive iterative development processes according to test-first principles.", "keywords": "test-first principle;code generation;scenarios", "title": "story driven testing - sdt"} {"abstract": "In this paper we present a method to iteratively construct new bent functions of n+2 variables from bent functions of n variables using minterms of n variables and minterms of two variables. Also, we provide the number of bent functions of n+2 variables that we can obtain with the method presented here.", "keywords": "boolean function;bent function;linear function;balanced function;nonlinearity;truth table;hamming weight;minterm", "title": "ON THE CONSTRUCTION OF BENT FUNCTIONS OF n+2 VARIABLES FROM BENT FUNCTIONS OF n VARIABLES"} {"abstract": "The ability to simulate complex physical situations in real-time is a critical element of any \"virtual world\" scenario, as well as being key for many engineering and robotics applications. Unfortunately, the computational cost of standard physical simulation methods increases rapidly as the situation becomes more complex. The result is that even when using the fastest supercomputers we are still able to interactively simulate only small, toy worlds. To solve this problem I propose changing the way we represent and simulate physics in order to reduce the computational complexity of physical simulation, thus making possible interactive simulation of complex situations.", "keywords": "interaction;situated;scenario;order;applications;method;simulation;engine;computation;supercomputer;physical simulation;standardization;physical;robotics;complexity;critic;real-time;cost;computational complexity;virtual world", "title": "computational complexity versus virtual worlds"} {"abstract": "In a ubiquitous environment, there are many applications where a server disseminates information of common interest to pervasive clients and devices. For example, an advertisement server sends information from a broadcast server to display devices. We propose an efficient information scheduling scheme for information broadcast systems to reduce average waiting time for information access while maintaining fairness between information items. Our scheme allocates information items adaptively according to relative popularity for each local server. Simulation results show that our scheme can reduce the waiting time up to 30% compared with the round robin scheme while maintaining cost-effective fairness. ", "keywords": "two-level broadcasting;fairness;data popularity", "title": "Efficient and fair scheduling for two-level information broadcasting systems"} {"abstract": "This paper defines the restricted growing concept (RGC) for object separation and provides an algorithmic analysis of its implementations. Our concept decomposes the problem of object separation into two stages. First, separation is achieved by shrinking the objects to their cores while keeping track of their originals as masks. Then the core is grown within the masks obeying the guidelines of a restricted growing algorithm. 
In this paper, we apply RGC to the remote sensing domain, particularly to synthetic aperture radar (SAR) sea ice images.", "keywords": "morphology;object separation;remote sensing imagery;restricted growing", "title": "Separating touching objects in remote sensing imagery: The restricted growing concept and implementations"} {"abstract": "It has been approximately 20 years since distributing scholarly journals digitally became feasible. This article discusses the broad implications of the transition to digital distributed scholarship from a historical perspective and focuses on the development of open access (OA) and the various models for funding OA in the context of the roles scholarly journals play in scientific communities.", "keywords": "open access;history;serials", "title": "Digital Distribution of Academic Journals and its Impact on Scholarly Communication: Looking Back After 20 Years"} {"abstract": "Research has shown that product reviews on the Internet not only support consumers when shopping, but also lead to increased sales for retailers. Recent approaches successfully use smart phones to directly relate products (e.g. via barcode or RFID) to corresponding reviews, making these available to consumers on the go. However, it is unknown what modality (star ratings/text/video) users consider useful for creating reviews and using reviews on their mobile phone, and how the preferred modalities are different from those on the Web. To shed light on this we conduct two experiments, one of them in a quasi-realistic shopping environment. The results indicate that, in contrast to the known approaches, stars and pre-structured text blocks should be implemented on mobile phones rather than long texts and videos. Users prefer less, but well-aggregated, product information while on the go. This holds both for entering and, surprisingly, also for using product reviews.", "keywords": "product recommendations;mobile interaction;product ratings;user interfaces;product reviews;mobile applications", "title": "an evaluation of product review modalities for mobile phones"} {"abstract": "The IEEE 802.11 distributed coordination function (DCF) provides a contention-based distributed channel access mechanism for stations to share the wireless medium. However, performance of the DCF drops dramatically due to high collision probability as the number of active stations becomes larger. In this paper, we propose a simple and effective collision resolution scheme for improving the performance of the DCF mechanism. Our idea is based on the estimation of the channel's contention level, by measuring the duration of busy and idle periods observed on the channel at each station. In order to reduce collision probability, the proposed scheme limits the number of contending stations at the same time according to the channel contention level. Performance of the proposed scheme is investigated by numerical analysis and simulation. Our results show that the proposed scheme is very effective and improves the performance under a wide range of contention levels.", "keywords": "backoff algorithm;collision resolution;dcf;mac;wireless lan", "title": "A distributed collision resolution scheme for improving the performance in wireless LANs"} {"abstract": "It has been shown that most of the radio frequency spectrum is inefficiently utilized. To fully use this spectrum, cognitive radio networks have been proposed. 
The idea is to allow secondary users to use a spectrum if the primary user (i.e., the legitimate owner of the spectrum) is not using it. To achieve this, secondary users should constantly monitor the usage of the spectrum to avoid interference with the primary user. However, achieving a trustworthy monitoring is not easy. A malicious secondary user who wants to gain an unfair use of a spectrum can emulate the primary user, and can thus trick the other secondary users into believing that the primary user is using the spectrum when it is not. This attack is called the Primary User Emulation (PUE) attack. To prevent this attack, there should be a way to authenticate primary users' spectrum usage. We propose a method that allows a primary user to add a cryptographic link signature to its signal so the spectrum usage by primary users can be authenticated. This signature is added to the signal in a transparent way, such that the receivers (who do not care about the signature) still function as usual, while the cognitive radio receivers can retrieve the signature from the signal. We describe two schemes to add a signature, one using modulation, and the other using coding. We have analyzed the performance of both schemes.", "keywords": "primary user emulation attack;cognitive radio networks;physical-layer authentication", "title": "cryptographic link signatures for spectrum usage authentication in cognitive radio"} {"abstract": "One of the difficulties in understanding and debugging spreadsheets is due to the invisibility of the data flow structure which is associated with cell formulas. In this paper, we present a spreadsheet visualization approach that is mainly based on the Markov Clustering (MCL) algorithm in an attempt to help spreadsheet users understand and debug their spreadsheets. The MCL algorithm helps in visualizing large graphs by generating clusters of cells. In our visualization approach, we also use compound fisheye views and treemaps to help in the navigation of the generated clusters. Compound fisheye views help to view members of a particular cluster while showing their linkages with other clusters. Treemaps help to visualize the depth we are at while navigating a cluster tree. Our initial experiments show that graph-based spreadsheet visualization using the MCL algorithm generates clusters which match the corresponding logical areas of a given spreadsheet. Our experiments also show that analysis of the clusters helps us to identify some errors in the spreadsheets.", "keywords": "visualization;visual programming;spreadsheets;mcl algorithm;end-user software engineering", "title": "an end-user oriented graph-based visualization for spreadsheets"} {"abstract": "CAPP systems play an important role in aiding planners during setup planning, operation sequencing and pallet configuration activities. The support and automation granted by these techniques, together with the use of non-linear process planning logic, lead to a reduction in planning time and costs, thus making manufacturers more competitive. This paper presents an approach that, by integrating process and production planning, defines at the shop-floor level the optimal operation sequence to machine all of the workpieces on a pallet using a four-axis machine tool. Part programs of non-production movements for each possible sequence of two operations are automatically generated at the shop-floor level and are simulated to obtain the non-production time. 
The complete sequence of operations is then defined on the basis of the minimisation of the estimated non-production time. This minimisation is performed using a mathematical model that defines a good sequence of operations. Four algorithms are adopted to analyse the proposed solution and to reduce the gap from optimality. The approach is tested on some cases taken from the literature and on a real case. The real case was provided by a company that produces mechanical components. The obtained results underline a reduction in production and planning time, and consequently an increase in the company's profit.", "keywords": "computer aided process planning;operation sequencing;network part program", "title": "Pallet operation sequencing based on network part program logic"} {"abstract": "In the presence of significant direction-of-arrival (DOA) mismatch, existing robust Capon beamformers based on the uncertainty set of the steering vector require a large size of uncertainty set for providing sufficient robustness against the increased mismatch. Under such circumstances, however, their output signal-to-interference-plus-noise ratios (SINRs) degrade. In this paper, a new robust Capon beamformer is proposed to achieve robustness against large DOA mismatch. The basic idea of the proposed method is to express the estimate of the desired steering vector corresponding to the signal of interest (SOI) as a linear combination of the basis vectors of an orthogonal subspace, then we can easily obtain the estimate of the desired steering vector by rotating this subspace. Different from the uncertainty set based methods, the proposed method does not make any assumptions on the size of the uncertainty set. Thus, compared to the uncertainty set based robust beamformers, the proposed method achieves a higher output SINR performance by preserving its interference-plus-noise suppression abilities in the presence of large DOA mismatch. In addition, a computationally efficient online implementation of the proposed method has also been developed. Computer simulations demonstrate the effectiveness and validity of the proposed method.", "keywords": "capon beamformer;robust adaptive beamformer;doa mismatch;robustness", "title": "Robust Capon beamforming against large DOA mismatch"} {"abstract": "Functional magnetic resonance imaging (fMRI) data are originally acquired as complex-valued images, which motivates the use of complex-valued data analysis methods. Due to the high dimension and high noise level of fMRI data, order selection and dimension reduction are important procedures for multivariate analysis methods such as independent component analysis (ICA). In this work, we develop a complex-valued order selection method to estimate the dimension of signal subspace using information-theoretic criteria. To correct the effect of sample dependence on information-theoretic criteria, we develop a general entropy rate measure for complex Gaussian random processes to calibrate the independent and identically distributed (i.i.d.) sampling scheme in the complex domain. We show the effectiveness of the approach for order selection on both simulated and actual fMRI data. A comparison between the results of order selection and ICA on real-valued and complex-valued fMRI data demonstrates that a fully complex analysis extracts more meaningful components about brain activation.", "keywords": "order selection;complex-valued fmri;linear mixing model;i.i.d. 
sampling;entropy rate", "title": "Order Selection of the Linear Mixing Model for Complex-Valued FMRI Data"} {"abstract": "We study the optimal approximation of the solution of an operator equation A(u) = f by certain n-term approximations with respect to specific classes of frames. We consider worst case errors, where f is an element of the unit ball of a Sobolev or Besov space B_q^1(L_p(Omega)) and Omega subset of R^d is a bounded Lipschitz domain; the error is always measured in the H^s-norm. We study the order of convergence of the corresponding nonlinear frame widths and compare it with several other approximation schemes. Our main result is that the approximation order is the same as for the nonlinear widths associated with Riesz bases, the Gelfand widths, and the manifold widths. This order is better than the order of the linear widths iff p < 2. The main advantage of frames compared to Riesz bases, which were studied in our earlier papers, is the fact that we can now handle arbitrary bounded Lipschitz domains, also for the upper bounds. ", "keywords": "elliptic operator equation;worst case error;frames;nonlinear approximation methods;best n-term approximation;manifold width;besov spaces on lipschitz domains", "title": "Optimal approximation of elliptic problems by linear and nonlinear mappings III: Frames"} {"abstract": "Efficient task scheduling on heterogeneous distributed computing systems (HeDCSs) requires the consideration of the heterogeneity of processors and the inter-processor communication. This paper presents a two-phase algorithm, called H2GS, for task scheduling on HeDCSs. The first phase implements a heuristic list-based algorithm, called LDCP, to generate a high quality schedule. In the second phase, the LDCP-generated schedule is injected into the initial population of a customized genetic algorithm, called GAS, which proceeds to evolve shorter schedules. GAS employs a simple genome composed of a two-dimensional chromosome. A mapping procedure is developed which maps every possible genome to a valid schedule. Moreover, GAS uses customized operators that are designed for the scheduling problem to enable an efficient stochastic search. The performance of each phase of H2GS is compared to two leading scheduling algorithms, and H2GS outperforms both algorithms. The improvement in performance obtained by H2GS increases as the inter-task communication cost increases.", "keywords": "genetic algorithms;task scheduling;list-based scheduling heuristics;directed acyclic graph;parallel and distributed processing;heterogeneous systems", "title": "A hybrid heuristic-genetic algorithm for task scheduling in heterogeneous processor networks"} {"abstract": "This paper first introduces three simple and effective image features - the color moment (CM), the color variance of adjacent pixels (CVAP) and CM-CVAP. The CM feature delineates the color-spatial information of images, and the CVAP feature describes the color variance of pixels in an image. However, these two features can only characterize the content of images in different ways. This paper hence provides another feature, CM-CVAP, which combines both, to raise the quality of the similarity measure. 
The experimental results show that the image retrieval method based on the CM-CVAP feature gives quite an impressive performance.", "keywords": "color-based image retrieval;color histogram;mass moment preserving", "title": "A color image retrieval method based on color moment and color variance of adjacent pixels"} {"abstract": "We outline the development of an interactive self-service healthcare kiosk. We apply a formal methodology to guarantee the measurement accuracy. The formal, generalizable and rigorous approach shows its practical efficiency. We know of no other studies that apply such methods in designing interaction. There is globally an increasing need for the health technologies outlined herein.", "keywords": "self-service healthcare kiosk;measurement accuracy;parameter identification", "title": "Designing and optimizing a healthcare kiosk for the community"} {"abstract": "Locality preservation is imposed on autoencoder-based pose recovery via a Laplacian matrix. The construction of the Laplacian matrix is improved by using hypergraph optimization.", "keywords": "human pose recovery;deep learning;manifold regularization;hypergraph;patch alignment framework", "title": "Hypergraph regularized autoencoder for image-based 3D human pose recovery"} {"abstract": "High Dynamic Range (HDR) images have been widely applied in daily applications. However, the HDR image is a special format that needs to be pre-processed for display by so-called tone mapping operators. Since the visual quality of HDR images is very sensitive to luminance value variations, conventional watermarking methods for low dynamic range (LDR) images are not suitable and may even cause catastrophic visible distortion. Currently, few methods for HDR image watermarking have been proposed. In this paper, two watermarking schemes targeting HDR images are proposed, which are based on μ-Law and bilateral filtering, respectively. Both the subjective and objective qualities of watermarked images are greatly improved by the two methods. What's more, these proposed methods also show higher robustness against tone mapping operations.", "keywords": "hdr image;watermarking;tone mapping;mu-law;bilateral filtering", "title": "Watermarking for HDR Image Robust to Tone Mapping"} {"abstract": "Due to the increasing competition of globalization, selection of the most appropriate personnel is one of the key factors for an organization's success. The importance and complexity of the personnel selection problem call for a method combining both subjective and objective assessments rather than just subjective decisions. The aim of this paper is to develop a new method to support this decision making process. An intuitionistic fuzzy multi-criteria group decision making method with grey relational analysis (GRA) is proposed. The intuitionistic fuzzy weighted averaging (IFWA) operator is utilized to aggregate individual opinions of decision makers into a group opinion. Intuitionistic fuzzy entropy is used to obtain the entropy weights of the criteria. GRA is applied to the ranking and selection of alternatives. 
Finally, a numerical example for personnel selection is given to illustrate the proposed method.", "keywords": "personnel selection;grey relational analysis;multi-criteria group decision making;intuitionistic fuzzy set", "title": "A GRA-based intuitionistic fuzzy multi-criteria group decision making method for personnel selection"} {"abstract": "During time-critical brain surgery, the detection of developing cerebral ischemia is particularly important because early therapeutic intervention may reduce the mortality of the patient. The purpose of this system is to provide an efficient means of remote teleconsultation for the early detection of ischemia, particularly when subspecialists are unavailable. The hardware and software design architecture for the multimedia brain function teleconsultation system including the dedicated brain function monitoring system is described. In order to comprehensively support remote teleconsultation, multi-media resources needed for ischemia interpretation were included: EEG signals, CSA, CD-CSA, radiological images, surgical microscope video images and video conferencing. PC-based system integration with standard interfaces and operability over the Ethernet meet the cost-effectiveness requirement, while the modular software was customized with a diverse range of data manipulations and control functions necessary for a shared workspace and standard interfaces.", "keywords": "brain function monitoring;teleconsultation;multimedia", "title": "Design of a PC-based multimedia telemedicine system for brain function teleconsultation"} {"abstract": "The time to market is a major concern in the high-technology industry and when designing new products, the development cycle time becomes critical. Indeed, when a delay occurs in the development schedule, the potential market share of the designed product can be drastically decreased. In this context, developing accelerated stress testing (AST) in order to quickly assess the long-term behavior of a semiconductor becomes extremely useful. In this paper we show an example of how thermal characterization including simulation can be used to define a consistent AST for power ICs.", "keywords": "semiconductor industry;multi-pulse testing;energy pulse characterization", "title": "Thermal characterization of LDMOS transistors for accelerated stress testing"} {"abstract": "In this paper, the steady flow and heat transfer of a magnetohydrodynamic fluid is studied. The fluid is assumed to be electrically conducting in the presence of a uniform magnetic field and occupies the porous space in an annular pipe. The governing nonlinear equations are modeled by introducing the modified Darcy's law obeying the Sisko model. The system is solved using the homotopy analysis method (HAM), which yields analytical solutions in the form of a rapidly convergent infinite series. Also, HAM is used to obtain analytical solutions of the problem for noninteger values of the power index. The resulting problem for the velocity field is then numerically solved using an iterative method to show the accuracy of the analytic solutions. The obtained solutions for the velocity and temperature fields are graphically sketched and the salient features of these solutions are discussed for various values of the power index parameter. We also present a comparison between Sisko and Newtonian fluids. 
", "keywords": "mhd sisko fluid;heat transfer;porous medium", "title": "Steady flow and heat transfer of a magnetohydrodynamic Sisko fluid through porous medium in annular pipe"} {"abstract": "Most prior work on information extraction has focused on extracting information from text in digital documents. However, often, the most important information being reported in an article is presented in tabular form in a digital document. If the data reported in tables can be extracted and stored in a database, the data can be queried and joined with other data using database management systems. In order to prepare the data source for table search, accurately detecting the table boundary plays a crucial role for the later table structure decomposition. Table boundary detection and content extraction is a challenging problem because tabular formats are not standardized across all documents. In this paper, we propose a simple but effective preprocessing method to improve the table boundary detection performance by considering the sparse-line property of table rows. Our method easily simplifies the table boundary detection problem into the sparse line analysis problem with much less noise. We design eight line label types and apply two machine learning techniques, Conditional Random Field (CRF) and Support Vector Machines (SVM), on the table boundary detection field. The experimental results not only compare the performances between the machine learning methods and the heuristics-based method, but also demonstrate the effectiveness of the sparse line analysis in the table boundary detection.", "keywords": "sparse line property;table boundary detection;support vector machine;table data collection;table labeling;conditional random field", "title": "identifying table boundaries in digital documents via sparse line detection"} {"abstract": "This paper proposed a new improved method for back propagation neural network, and used an efficient method to reduce the dimension and improve the performance. The traditional back propagation neural network (BPNN) has the drawbacks of slow learning and is easy to trap into a local minimum, and it will lead to a poor performance and efficiency. In this paper, we propose the learning phase evaluation back propagation neural network (LPEBP) to improve the traditional BPNN. We adopt a singular value decomposition (SVD) technique to reduce the dimension and construct the latent semantics between terms. Experimental results show that the LPEBP is much faster than the traditional BPNN. It also enhances the performance of the traditional BPNN. The SVD technique cannot only greatly reduce the high dimensionality but also enhance the performance. So SVD is to further improve the document classification systems precisely and efficiently.", "keywords": "document classification;singular value decomposition;bpnn;lpebp", "title": "An efficient document classification model using an improved back propagation neural network and singular value decomposition"} {"abstract": "A method of assisting a user in finding the required documents effectively is proposed. A user being informed which documents are worth examining can browse in a digital library (DL) in a linear fashion. 
Computational evaluations were carried out, and a DL and its navigator were designed and constructed.", "keywords": "learning;relevant information;browsing assistant", "title": "browsing in a digital library collecting linearly arranged documents"} {"abstract": "From the nano/micro-manipulation domain to intervention in the nuclear field, haptics has become essential in today's teleoperation systems. The active property of this modality makes gestures more reliable and accurate. Nowadays, there is much research concerning the integration of the haptic modality. The proposed solutions achieve, depending on the adopted approach, various levels of efficiency. However, this work generally does not systematically integrate psychophysics studies and ergonomics considerations. These elements are very important if we want to effectively include the human operator in any teleoperation system. Moreover, various new applications present the human operator with several unfamiliar phenomena (e.g., nano-environments, underwater environments and outer space environments). It is thus necessary in this type of application to virtually transform the remote environment so as to present nature-like haptic interactions to the operator. In this paper, we present a framework for building haptic interactions for teleoperation systems. This framework integrates psychophysics studies and ergonomics elements, and presents a method to project the remote environment into the intuitive perception space.", "keywords": "teleoperation;haptic interaction", "title": "a framework for building haptic interactions for teleoperation systems"} {"abstract": "The lattice L^u of upper semicontinuous convex normal functions with convolution ordering arises in studies of type-2 fuzzy sets. In 2002, Kawaguchi and Miyakoshi [Extended t-norms as logical connectives of fuzzy truth values, Multiple-Valued Logic 8(1) (2002) 53-69] showed that this lattice is a complete Heyting algebra. Later, Harding et al. [Lattices of convex, normal functions, Fuzzy Sets and Systems 159 (2008) 1061-1071] gave an improved description of this lattice and showed it was a continuous lattice in the sense of Gierz et al. [A Compendium of Continuous Lattices, Springer, Berlin, 1980]. In this note we show the lattice L^u is isomorphic to the lattice of decreasing functions from the real unit interval [0, 1] to the interval [0,2] under pointwise ordering, modulo equivalence almost everywhere. This allows development of further properties of L^u. It is shown that L^u is completely distributive, is a compact Hausdorff topological lattice whose topology is induced by a metric, and is self-dual via a period two antiautomorphism. We also show the lattice L^u has another realization of natural interest in studies of type-2 fuzzy sets. It is isomorphic to a quotient of the lattice L of all convex normal functions under the convolution ordering. This quotient identifies two convex normal functions if they agree almost everywhere and their intervals of increase and decrease agree almost everywhere. ", "keywords": "type-2 fuzzy set;uniquely complemented lattice;complete lattice;continuous lattice;metric topology", "title": "Convex normal functions revisited"} {"abstract": "The ever-increasing volume of spatial data has greatly challenged our ability to extract useful but implicit knowledge from them. 
As an important branch of spatial data mining, spatial outlier detection aims to discover the objects whose non-spatial attribute values are significantly different from the values of their spatial neighbors. These objects, called spatial outliers, may reveal important phenomena in a number of applications including traffic control, satellite image analysis, weather forecast, and medical diagnosis. Most of the existing spatial outlier detection algorithms mainly focus on identifying single attribute outliers and could potentially misclassify normal objects as outliers when their neighborhoods contain real spatial outliers with very large or small attribute values. In addition, many spatial applications contain multiple non-spatial attributes which should be processed altogether to identify outliers. To address these two issues, we formulate the spatial outlier detection problem in a general way, design two robust detection algorithms, one for single attribute and the other for multiple attributes, and analyze their computational complexities. Experiments were conducted on a real-world data set, West Nile virus data, to validate the effectiveness of the proposed algorithms.", "keywords": "algorithm;outlier detection;spatial data mining", "title": "On detecting spatial outliers"} {"abstract": "We explore the problems derived from aggregating quantitative opinions in reputation. We propose a mechanism that aggregates opinions based on pairwise comparisons. We show the suitability of the mechanism with an evaluation on real data sets.", "keywords": "reputation;pairwise elicitation;tournaments;social networks", "title": "From blurry numbers to clear preferences: A mechanism to extract reputation in social networks"} {"abstract": "The distribution of the cross correlation between the ternary m-sequence {s(t)} of period n = 3^m - 1 and the decimated sequences {s(dt)} and {s(dt+1)} of period (3^m - 1)/2, where d = (3^k + 1)/2 with k odd and gcd(k, m) = 1, is determined. The method to find this distribution is related to the result by Coulter and Matthews that f(x) = x^d is a planar function over GF(3^m).", "keywords": "cross-correlation function;m-sequences;planar functions", "title": "On the correlation distribution of the Coulter-Matthews decimation"} {"abstract": "Field failure data often exhibit extra heterogeneity as early failure data may have quite different distribution characteristics from later failure data. These infant failures may come from a defective subpopulation instead of the normal product population. Many existing methods for field failure analyses focus only on the estimation for a hypothesized mixture model, while the model identification is ignored. This paper aims to develop efficient, accurate methods for both detecting data heterogeneity, and estimating mixture model parameters. Mixture distribution detection is achieved by applying a mixture detection plot (MDP) on field failure observations. The penalized likelihood method, and the expectation-maximization (EM) algorithm are then used for estimating the components in the mixture model. 
Two field datasets are employed to demonstrate and validate the proposed approach.", "keywords": "expectation maximization;infant mortality;mixture detection plot;mixture distribution", "title": "A Graphical Technique and Penalized Likelihood Method for Identifying and Estimating Infant Failures"} {"abstract": "Service composition allows multimedia services to be automatically composed from atomic service components based on dynamic service requirements. Previous work falls short for distributed multimedia service composition in terms of scalability, flexibility and quality-of-service (QoS) management. In this paper, we present a fully decentralized service composition framework, called SpiderNet, to address the challenges. SpiderNet provides statistical multiconstrained QoS assurances and load balancing for service composition. Moreover, SpiderNet supports directed acyclic graph composition topologies and exchangeable composition orders. We have implemented a prototype of SpiderNet and conducted experiments on both wide-area networks and a simulation testbed. Our experimental results show the feasibility and efficiency of the SpiderNet service composition framework.", "keywords": "middleware;quality-of-service ;service composition;service overlay network", "title": "Distributed multimedia service composition with statistical QoS assurances"} {"abstract": "Deflection yoke (DY) is one of the core components of a cathode ray tube (CRT) in a computer monitor or a television that determines the image quality. Once a DY anomaly is found from beam patterns on a display in the production line of CRTs, the remedy process should be performed through three steps: identifying misconvergence types from the anomalous display pattern, adjusting manufacturing process parameters, and fine tuning. This study focuses on discovering a classifier for the identification of DY misconvergence patterns by applying a coevolutionary classification method. The DY misconvergence classification problems may be decomposed into two subproblems, which are feature selection and classifier adaptation. A coevolutionary classification method is designed by coordinating the two subproblems, whose performances are affected by each other. The proposed method establishes a group of partial sub-regions, defined by regional feature set, and then fits a finite number of classifiers to the data pattern by using a genetic algorithm in every sub-region. A cycle of the cooperation loop is completed by evolving the sub-regions based on the evaluation results of the fitted classifiers located in the corresponding sub-regions. The classifier system has been tested with real-field data acquired from the production line of a computer monitor manufacturer in Korea, showing superior performance to other methods such as k-nearest neighbors, decision trees, and neural networks.", "keywords": "deflection yoke;pattern classification;feature selection", "title": "A classifier learning system using a coevolution method for deflection yoke misconvergence pattern classification problem"} {"abstract": "The paper introduces an energy function based fuzzy tuning method for the controller parameters of an HVDC transmission link. The test system, a point to point DC link, was subjected to various small and large disturbances to examine the effectiveness of the proposed method. 
The DC current error and its derivative are taken as the two principal signals to generate the change in the proportional and the integral gains of the rectifier current regulator according to a fuzzy rule base. Computer simulation results confirm the superiority of the proposed adaptive fuzzy controllers over the conventional fixed gain controllers in damping out the transient oscillations in HVDC links connected to weak AC systems.", "keywords": "fuzzy tuning controller;computer simulation;fixed gain controller", "title": "Design of an energy function based fuzzy tuning controller for HVDC links"} {"abstract": "The Anx7 gene codes for a Ca2+/GTPase with calcium channel and membrane fusion properties that has been proposed to regulate exocytotic secretion in chromaffin and other cell types. We have previously reported that the homozygous Anx7 (-/-) knockout mouse has an embryonically lethal phenotype. However, the viable heterozygous Anx7 (+/-) mouse displays a complex phenotype that includes adrenal gland hypertrophy, chromaffin cell hyperplasia, and defective IP3 receptor (IP3R) expression. To search for a molecular basis for this phenotype, we have used cDNA microarray technology and have challenged control and mutant mice with fed or fasting conditions. We report that in the absence of the Anx7/IP3R signaling system, the cells in the adrenal gland are unable to discriminate between the fed and fasted states, in vivo. In control chromaffin cells, fasting is accompanied by an increased expression of structural genes for chromaffin cell contents, including chromogranin A and B, and DβH. There are also genes whose expression is specifically reduced. However, the Anx7 (+/-) mutation results in sustained expression of these nutritionally sensitive genes. We hypothesize that the calcium signaling defect due to the missing IP3R may be responsible for the global effects of the mutation on nutritionally sensitive genes. We further hypothesize that the tonically elevated expression of chromogranin A, reportedly a master control switch for dense core granule formation, may contribute to the process driving glandular hypertrophy and chromaffin cell hyperplasia in the Anx7 (+/-) mutant mouse.", "keywords": "annexin 7;synexin;adrenal medulla;mutation;chromaffin cell;nutriomics", "title": "Influence of the Anx7 (+/-) Knockout Mutation and Fasting Stress on the Genomics of the Mouse Adrenal Gland"} {"abstract": "Query-based sampling is a method of discovering the contents of a text database by submitting queries to a search engine and observing the documents returned. In prior research sampled documents were used to build resource descriptions for automatic database selection, and to build a centralized sample database for query expansion and result merging. An unstated assumption was that the associated storage costs were acceptable. When sampled documents are long, storage costs can be large. This paper investigates methods of pruning long documents to reduce storage costs. 
The experimental results demonstrate that building resource descriptions and centralized sample databases from the pruned contents of sampled documents can reduce storage costs by 54-93% while causing only minor losses in the accuracy of distributed information retrieval.", "keywords": "distributed information retrieval;merging;association;centrality;document pruning;paper;sampling;prune;select;contention;research;method;accuracy;experimentation;search engine;query-expansion;storage;text;demonstrate;queries;distributed;documentation;database;resource;query;information retrieval", "title": "pruning long documents for distributed information retrieval"} {"abstract": "As multi-core processors are becoming common, vendors are starting to explore trade offs between the die size and the number of cores on a die, leading to heterogeneity among cores on a single chip. For efficient utilization of these processors, application threads must be assigned to cores such that the resource needs of a thread closely matches resource availability at the assigned core. Current methods of thread-to-core assignment often require an application's execution trace to determine its runtime properties. These traces are obtained by running the application on some representative input. A problem is that developing these representative input sets is time consuming, and requires expertise that the user of a general-purpose processor may not have. We propose an approach for automatic thread-to-core assignment for heterogeneous multicore processors to address this problem. The key insight behind our approach is simple - if two phases of a program are similar, then the data obtained by dynamic monitoring of one phase can be used to make scheduling decisions about other similar phases. The technical underpinnings of our approach include: a preliminary static analysis-based approach for determining similarity among program sections, and a thread-to-core assignment algorithm that utilizes the statically generated information as well as execution information obtained from monitoring a small fraction of the program to make scheduling decisions.", "keywords": "thread-to-core assignment;static program analysis;heterogeneous multi-core processors;phase behavior", "title": "predictive thread-to-core assignment on a heterogeneous multi-core processor"} {"abstract": "In this paper we consider some methods for the maximum likelihood estimation of sparse Gaussian graphical (covariance selection) models when the number of variables is very large (tens of thousands or more). We present a procedure for determining the pattern of zeros in the model and we discuss the use of limited memory quasi-Newton algorithms and truncated Newton algorithms to fit the model by maximum likelihood. We present efficient ways of computing the gradients and likelihood function values for such models suitable for a desktop computer. For the truncated Newton method we also present an efficient way of computing the action of the Hessian matrix on an arbitrary vector which does not require the computation and storage of the Hessian matrix. The methods are illustrated and compared on simulated data and applied to a real microarray data set. 
The limited memory quasi-Newton method is recommended for practical use.", "keywords": "covariance selection;gene networks;graphical models;high dimensional;large scale optimisation;limited memory quasi-newton", "title": "Fitting very large sparse Gaussian graphical models"} {"abstract": "This paper deals with a straightforward and effective solution that isolates tiny objects from very poor-quality angiogenesis images. The objects of interest consist of the cross-section of blood vessels present in histological cuts of malignant tumors that grow in soft parts of the human body through a natural process known as angiogenesis. The proposed strategy applies a conditional morphological closing operator using a structuring element based on criteria resulting from local statistical properties. This approach gives in all cases a lower percentage of false target count (FTC) and false non-target count (FNTC) errors, with respect to the error equally calculated for two other strategies discussed briefly in this paper, when the results are compared with images segmented manually by pathologists.", "keywords": "angiogenesis;blood vessel segmentation;conditional closing;noise filtering;morphology", "title": "Segmentation of tiny objects in very poor-quality angiogenesis images"} {"abstract": "The traditional distinction between primary (observation independent) and secondary (observation dependent) qualities is not based on a difference that can be sustained in the full light of contemporary scientific understanding. An alternative division of physical and chemical properties is proposed. Like the traditional division of qualities, the alternative system has two main categories. Properties of compound particulars that result from simple combination (e.g., addition) of the properties of their component parts constitute the first class", "keywords": "properties;qualities;mereology;primary qualities;chemical combination;mixis;cooperative interactions;autocatalysis;dissipative structures;mixed valence;ivct", "title": "Varieties of Properties"} {"abstract": "We consider the problem of computing Byzantine Agreement in a synchronous network with n processors each with a private random string, where each pair of processors is connected by a private communication line. The adversary is malicious and non-adaptive, i.e., it must choose the processors to corrupt at the start of the algorithm. Byzantine Agreement is known to be computable in this model in an expected constant number of rounds. We consider a scalable model where in each round each uncorrupted processor can send to any set of log n other processors and listen to any set of log n processors. We define the loss of a computation to be the number of uncorrupted processors whose output does not agree with the output of the majority of uncorrupted processors. We show that if there are t corrupted processors, then any protocol which has probability at least 1/2 + 1/log n of loss less than t^(2/3)/(32 f n^(1/3) log^(5/3) n) requires at least f rounds.", "keywords": "probabilistic;scalable;byzantine agreement;malicious adversary;non-adaptive adversary;randomized;distributed computing;lower bounds", "title": "lower bound for scalable byzantine agreement"} {"abstract": "Autonomous grasping is an important but challenging task and has therefore been intensively addressed by the robotics community. One of the important issues is the ability of the grasping device to accommodate varying object shapes in order to form a stable, multi-point grasp. 
Particularly in the human environment, where robots are faced with a vast set of objects varying in shape and size, a versatile grasping device is highly desirable. Solutions to this problem have often involved discrete continuum structures that typically comprise compliant sections interconnected with mechanically rigid parts. Such devices require more complex control and planning of the grasping action than intrinsically compliant structures, which passively adapt to objects with complex shapes. In this paper, we present a low-cost, soft cable-driven gripper, featuring no stiff sections, which is able to adapt to a wide range of objects due to its entirely soft structure. Its versatility is demonstrated in several experiments. In addition, we also show how its compliance can be passively varied to ensure a compliant but also stable and safe grasp.", "keywords": "grasping;soft robotics;continuum robot;variable compliance;shape invariant grasping", "title": "A variable compliance, soft gripper"} {"abstract": "In this paper, we explore techniques that aim to improve site understanding for outdoor Augmented Reality (AR) applications. While the first-person perspective in AR is a direct way of filtering and zooming on a portion of the data set, it severely narrows the overview of the situation, particularly over large areas. We present two interactive techniques to overcome this problem: multi-view AR and variable perspective view. We describe in detail the conceptual, visualization and interaction aspects of these techniques and their evaluation through a comparative user study. The results we have obtained strengthen the validity of our approach and the applicability of our methods to a large range of application domains.", "keywords": "information interfaces and presentation;mobile augmented reality;multi-perspective views;situation awareness;navigation", "title": "Extended Overview Techniques for Outdoor Augmented Reality"} {"abstract": "We have performed quantum chemical calculations for the MCCBr···NCH and HCCBr···NCM' (M, M' = Cu, Ag, and Au) halogen-bonded complexes at the MP2 level. The results showed that the transition metals have different influences on the halogen bond donor and the electron donor. The transition metal atom in the former makes the halogen bond weaker, and that in the latter enhances it. Molecular electrostatic potential and natural bond orbital analyses were carried out to reveal the nature of the substitution.", "keywords": "electron donor;halogen bond donor;molecular electrostatic potential;nbo;transition metal", "title": "Influence of transition metals on halogen-bonded complexes of MCCBr···NCH and HCCBr···NCM' (M, M' = Cu, Ag, and Au)"} {"abstract": "We provide models for evaluating the performance, cost and power consumption of different architectures suitable for a metropolitan area network (MAN). We then apply these models to compare today's synchronous optical network/synchronous digital hierarchy metro rings with different alternatives envisaged for next-generation MANs: an Ethernet carrier grade ring, an optical hub-based architecture and an optical time-slotted wavelength division multiplexing (WDM) ring. Our results indicate that the optical architectures are likely to decrease power consumption by up to 75% when compared with present-day MANs.
Moreover, by allowing the capacity of each wavelength to be dynamically shared among all nodes, a transparent slotted WDM yields throughput performance that is practically equivalent to that of today's electronic architectures, for equal capacity.", "keywords": "metropolitan area networks;optical networks;power consumption;performance evaluation", "title": "Cost, Power Consumption and Performance Evaluation of Metro Networks"} {"abstract": "Mass fortification of maize flour and corn meal with a single or multiple micronutrients is a public health intervention that aims to improve vitamin and mineral intake, micronutrient nutritional status, health, and development of the general population. Micronutrient malnutrition is unevenly distributed among population groups and is importantly determined by social factors, such as living conditions, socioeconomic position, gender, cultural norms, health systems, and the socioeconomic and political context in which people access food. Efforts trying to make fortified foods accessible to the population groups that most need them require acknowledgment of the role of these determinants. Using a perspective of social determinants of health, this article presents a conceptual framework to approach equity in access to fortified maize flour and corn meal, and provides nonexhaustive examples that illustrate the different levels included in the framework. Key monitoring areas and issues to consider in order to expand and guarantee more equitable access to maize flour and corn meal are described.", "keywords": "fortified maize flour;social determinants of health;equity;accessibility;fortified corn meal", "title": "Equity in access to fortified maize flour and corn meal"} {"abstract": "This article presents a printed crescent-shaped monopole MIMO diversity antenna for wireless communications. The port-to-port isolation is increased by introducing an I-shaped conductor symmetrically between the two antenna elements and shaping the ground plane. Both the computed and experimental results confirm that the antenna possesses a wide impedance bandwidth of 54.5% across 1.6–2.8 GHz, with a reflection coefficient and mutual coupling better than −10 and −14 dB, respectively. By further validating the simulated and measured radiation and MIMO characteristics, including far-field patterns, gain, envelope correlation and channel capacity loss, the results show that the antenna can offer effective MIMO/diversity operation in multipath environments.", "keywords": "monopole antenna;port-to-port isolation;reflection coefficient;mutual coupling;mimo", "title": "Design of a printed MIMO/diversity monopole antenna for future generation handheld devices"} {"abstract": "In 1993, Goguen published a research note addressing the social issues in Requirements Engineering. He identified in the requirements process three major social groups: the client organization, the requirements team, and the development team. However, there is still a lack of technological support for tracing requirements to social issues within the requirements team or development team. From early published traceability metamodels to the current requirements traceability literature, the client organization and the stakeholders are first-class citizens, but the software engineers and the interactions between these groups are not. In this paper we present a partially formalized RichPicture traceability model to fill this gap.
ITrace is a flexible model to weave together the social network graph, the information sources graph, the social interactions graph, and the Requirements Engineering artifacts evolution graph. We empirically developed our traceability model by tracking the evolution of a Transparency catalogue. We also compare our model structure to Contribution Structures.", "keywords": "rich picture;software evolution;requirements traceability;graph-based traceability;social issues", "title": "a rich traceability model for social interactions"} {"abstract": "The utility of the auditory evoked potential (AEP) is under investigation as a feedback signal for the automatic closed-loop control of general anaesthesia using neural networks and fuzzy logic. The AEP is a signal derived from the electroencephalogram (EEG) in response to auditory stimulation, which may be useful as an index of the depth of anaesthesia. A simple back-propagation neural network can learn the AEP and provides a satisfactory input to a fuzzy logic infusion controller for the administration of anaesthetic drugs, but the problem remains that of reliable signal acquisition.", "keywords": "anaesthesia;closed-loop control;auditory evoked potential;neural network;fuzzy controller", "title": "Neuro-fuzzy closed-loop control of depth of anaesthesia"} {"abstract": "This paper deals with the computability of interactive computations. It aims at the characterization and analysis of a general concept of interactive computation as a basis for the extension and generalization of the notion of computability to interactive computations. We extend the notion of computability to interactive computations. Instead of partial functions on natural numbers or on finite strings we work with functions and relations on infinite input and output streams. As part of the computability of such functions and relations on streams we treat the following aspects of interactive computations: causality between input and output streams; realizability of single output streams for given input streams; the role of non-realizable output; relating non-realizable behaviors to state machines; the concept of interactive computation and computability for interactive systems; and the role of time in computability.", "keywords": "computability;interaction;realizability", "title": "Computability and realizability for interactive computations"} {"abstract": "The probability density function (pdf) of the differential phase difference (DPD) in the radio frequency (RF) pulse-burst perturbed by Gaussian noise at the coherent receiver is discussed. Statistical properties of the DPD are of importance for error estimation in coherent systems such as remote passive wireless surface acoustic wave (SAW) sensing with multiple differential phase measurement. The rigorous probability density of the DPD is derived and its particular functions, all having no closed forms, are given for different signal-to-noise ratios (SNRs) in the RF pulses. Employing the von Mises/Tikhonov distribution, an efficient approximation is proposed via the modified Bessel functions of the first kind and zeroth order. Engineering features and small errors of the approximation are demonstrated.
Applications are given for the phase difference drift rate and the error probability that the drift rate exceeds a threshold.", "keywords": "differential phase difference;probability density;passive saw sensing;drift rate;error probability", "title": "Probability density of the differential phase difference in applications to passive wireless surface acoustic wave sensing"} {"abstract": "In a recent article Averbakh and Berman present an O(p^3 √(log log p) log p + n^2) serial algorithm to solve the distance constrained p-center location problem with mutual communication on a tree network with n nodes. In this note we suggest two simple modifications leading to the improved (subquadratic in n) O(p^3 √(log log p) log p + p(n + p) log^2(n + p)) complexity bound. We also present a new O(p^2 n log n log(n + p)) algorithm for the discrete version of this problem.", "keywords": "multifacility location;p-center;tree networks", "title": "An improved algorithm for the distance constrained p-center location problem with mutual communication on tree networks"} {"abstract": "The Combined Array for Millimeter-wave Astronomy (CARMA) data reduction pipeline (CADRE) has been developed to give investigators a first look at a fully reduced set of their data. It runs automatically on all data produced by the telescope as they arrive in the CARMA data archive. CADRE is written in Python and uses Python wrappers for MIRIAD subroutines for direct access to the data. It goes through the typical reduction procedures for radio telescope array data and produces a set of continuum and spectral line maps in both MIRIAD and FITS format. CADRE has been in production for nearly two years and this paper presents its current capabilities and planned development.", "keywords": "data reduction pipeline", "title": "CADRE: The CArma Data REduction pipeline"} {"abstract": "A low noise amplifier (LNA) in many wireless and wireline communication systems must have low noise, sufficient gain and high linearity. This paper presents a novel IP3 boosting technique using a Feedforward Distortion Cancellation (FDC) method, that is, using an additional path to generate distortion that then cancels the original LNA's distortion at its output.
Through this technique, the IIP3 of the LNA can be boosted from about 0 dBm, the level reported in most published literature to date, to +21 dBm, the highest value reported to date, with negligible noise degradation.", "keywords": "ip3 boosting;cmos lna;noise;wlan", "title": "A novel IP3 boosting technique using feedforward distortion cancellation method for 5 GHz CMOS LNA"} {"abstract": "In this paper, a new lossless data compression method that is based on digram coding is introduced. This data compression method uses semi-static dictionaries: all of the used characters and the most frequently used two-character blocks (digrams) in the source are found and inserted into a dictionary in the first pass, and compression is performed in the second pass. This two-pass structure is repeated several times, and in every iteration a particular number of elements is inserted in the dictionary until the dictionary is filled. This algorithm (ISSDC: Iterative Semi-Static Digram Coding) also includes mechanisms that can decide the total number of iterations and the dictionary size whenever these values are not given by the user. Our experiments show that ISSDC is better than LZW/GIF and BPE in compression ratio. It is worse than DEFLATE in compression of text and binary data, but better than PNG (which uses DEFLATE compression) in lossless compression of simple images.", "keywords": "lossless data compression;dictionary-based compression;semi-static dictionary;digram coding", "title": "ISSDC DIGRAM CODING BASED LOSSLESS DATA COMPRESSION ALGORITHM"} {"abstract": "This paper describes the objectives and contents of a cost-effective curriculum for an embedded systems course at our university. The main aim of the course is learning to solve a real problem by developing a real system. The students learn skills to adapt this system to new scenarios. The system consists of a wireless module, a microcontroller and an application for smartphones to control lights by wireless communications. In order to motivate students, JavaME and Android, important technologies for today's smartphones, were chosen. As a result of the course, following the Bologna guidelines, the students worked cooperatively, as in a real scenario. Each group member worked in a complementary manner, analyzing the division of tasks for each student. The project-based methodology we followed provided incremental learning to the students. According to the course survey, students reported that they are knowledgeable about how embedded systems work after taking the course.", "keywords": "engineering education;embedded system;project-based learning;educational innovation;education technology;radio communication", "title": "An embedded system course using JavaME and android"} {"abstract": "This paper mainly deals with the existence and multiplicity of positive solutions for the focal problem involving both the p-Laplacian and the first order derivative: ((u')^{p-1})' + f(t, u, u') = 0, t ∈ (0, 1), u(0) = u'(1) = 0. The main tool in the proofs is fixed point index theory, based on a priori estimates achieved by using Jensen's inequality and a new inequality. Finally the main results are applied to establish the existence of positive symmetric solutions to the Dirichlet problem: (|u'|^{p-2}u')' + f(u, u') = 0, t ∈ (−1, 0) ∪ (0, 1), u(−1) = u(1) = 0.
", "keywords": "p-laplacian equation;positive solution;focal problem;fixed point index;dirichlet problem;jensen's inequality", "title": "Positive solutions of a focal problem for one-dimensional p-Laplacian equations"} {"abstract": "A new one-dimensional gas-kinetic BGK scheme for gaswater flow is developed with the inclusion of the stiffened equation of state for water. The mixture model is considered, where the gas and water inside a computational cell achieve the equilibrium state, with equal pressure, velocity and temperature, within a time step. The splitting method is adopted to calculate the flux of each component at a cell interface individually. The preliminary application of the present newly developed method in different types of shock tube problems, including gasgas shock tube and gaswater shock tube problems, validates its good performance for gaswater flow.", "keywords": "gas-kinetic scheme;gaswater two-phase flow;mixture model", "title": "A gas-kinetic BGK scheme for gaswater flow"} {"abstract": "The paper presents quasi-static plane strain FE-simulations of strain localization in reinforced concrete beams without stirrups. The material was modeled with two different isotropic continuum crack models: an elasto-plastic and a damage one. In case of elasto-plasticity, linear Drucker-Prager criterion with a non-associated flow rule was defined in the compressive regime and a Rankine criterion with an associated flow rule was adopted in the tensile regime. In the case of a damage model, the degradation of the material due to micro-cracking was described with a single scalar damage parameter. To ensure the mesh-independence and to capture size effects, both criteria were enhanced in a softening regime by non-local terms. Thus, a characteristic length of micro-structure was included. The effect of a characteristic length, reinforcement ratio, bond-slip stiffness, fracture energy and beam size on strain localization was investigated. The numerical results with reinforced concrete beams were quantitatively compared with corresponding laboratory tests by Walraven (1978).", "keywords": "bond-slip;concrete;characteristic length;damage mechanics;elasto-plasticity;nonlocal theory;reinforcement;strain localization", "title": "Simulations of spacing of localized zones in reinforced concrete beams using elasto-plasticity and damage mechanics with non-local softening"} {"abstract": "A \"denial-of-service\" attack is characterized by an explicit attempt by attackers to prevent legitimate users of a service from using that service. SYN flood attack is one of the most common types of DoS. In this lab, we model and simulate a real world network, and we launch a SYN attack against our web server. Through this, we study the nature of the attack and investigate the effectiveness of several approaches in defending against SYN attack. This lab will allow students to anatomize the SYN flooding attack and defense in the lab environment and obtain data and parameters of DoS resilience capability.", "keywords": "curriculum;security;laboratory", "title": "a lab implementation of syn flood attack and defense"} {"abstract": "We study the problem of allocating students to projects, where both students and lecturers have preferences over projects, and both projects and lecturers have capacities. In this context we seek a stable matching of students to projects, which respects these preference and capacity constraints. 
Here, the stability definition generalises the corresponding notion in the context of the classical Hospitals/Residents problem. We show that stable matchings can have different sizes, which motivates max-spa-p, the problem of finding a maximum cardinality stable matching. We prove that max-spa-p is NP-hard and not approximable within δ, for some δ > 1, unless P = NP. On the other hand, we give an approximation algorithm with a performance guarantee of 2 for max-spa-p.", "keywords": "matching problem;stable matching;np-hardness;approximation hardness;approximation algorithm", "title": "Student-Project Allocation with preferences over Projects"} {"abstract": "One of the next steps in the current rapid evolution of wireless technologies will be that operators will need to enable users to use communication services independently of access technologies, so they will have to support seamless handovers in heterogeneous networks. In this paper we present a novel SIP-based procedure for congestion-aware handover in heterogeneous networks. With newly defined SIP messages, the handover decision is based not only on signal strength, but also on target network status. Using the SIP protocol, this approach is completely independent of the access technology and the underlying protocol used, and thus can be easily deployed in operators' environments. The proposed procedure was evaluated with a purpose-built simulation model. The results show that, using the proposed procedure, the QoE of VoIP users can be maintained during a session by eliminating handovers to a target network that could suffer degradation of service, and by triggering handovers to another network when degradation of service is imminent in the current network.", "keywords": "seamless handover;sip;heterogeneous networks;performance evaluation;congestion awareness", "title": "A novel SIP based procedure for congestion aware handover in heterogeneous networks"} {"abstract": "This paper describes a method for localizing objects in an actual living environment. We have developed this method by using a complementary combination of 1) received signal strength indicators (RSSIs) and vibration data acquired from active RFID tags, and 2) human behavior detected from various types of sensors embedded in the environment. Regarding the former, we use a pattern recognition method to select features appearing in RSSIs received by several radio frequency (RF) readers at different places and to classify them into a particular location. In our work, we regard the estimated location as the most probable location where the object is placed. As for the latter, we use the detected human behavior to support the estimation based on the analysis of RSSIs. Experimental results showed that the proposed method improved the estimation performance from about 50 to 95% compared with using only RSSIs to localize objects. Moreover, the results also suggested that we can estimate object location indoors without sensors for detecting human position.
This indoor object localization method can contribute to constructing an indoor object management system that improves living comfort.", "keywords": "rssi;environment-embedded sensor;indoor localization;active rfid", "title": "use of active rfid and environment-embedded sensors for indoor object location estimation"} {"abstract": "Compared with noisy chaotic neural networks (NCNNs), hysteretic noisy chaotic neural networks (HNCNNs) are more likely to exhibit better optimization performance at higher noise levels, but behave worse at lower noise levels. In order to improve the optimization performance of HNCNNs, this paper presents a novel noise-tuning-based hysteretic noisy chaotic neural network (NHNCNN). Using a noise tuning factor to modulate the level of stochastic noise, the proposed NHNCNN not only balances stochastic wandering and chaotic searching, but also exhibits stronger hysteretic dynamics, thereby improving the optimization performance at both lower and higher noise levels. The aim of the broadcast scheduling problem (BSP) in wireless multihop networks (WMNs) is to design an optimal time-division multiple-access frame structure with minimal frame length and maximal channel utilization. A gradual NHNCNN (G-NHNCNN), which combines the NHNCNN with the gradual expansion scheme, is applied to solve the BSP in WMNs to demonstrate the performance of the NHNCNN. Simulation results show that the proposed NHNCNN has a larger probability of finding better solutions, compared to both the NCNN and the HNCNN, regardless of whether noise amplitudes are lower or higher.", "keywords": "broadcast scheduling problem;hysteresis;noise tuning;noisy chaotic neural network;wireless multihop networks", "title": "Noise-Tuning-Based Hysteretic Noisy Chaotic Neural Network for Broadcast Scheduling Problem in Wireless Multihop Networks"} {"abstract": "We investigate the performance of different protocol stacks under various application scenarios. Our method of choice is a full-fledged simulation in QualNet, testing the complete protocol stack over fairly large-scale networks. We find that the relative ranking of protocols strongly depends on the network scenario, the session load, the mobility level, and the choice of protocol parameters. We show that the Parametric Probabilistic Protocols, which we generalize from their original definition, can outperform standard routing protocols, such as AODV, Gossiping or Shortest-Path, in a variety of realistic scenarios.", "keywords": "wireless networks;probabilistic routing;scenarios;mobility", "title": "probabilistic multi-path vs. deterministic single-path protocols for dynamic ad-hoc network scenarios"} {"abstract": "It is possible to interpret 2D and 3D seismic data using NIH Image, a free medical imaging software product developed by the US National Institutes of Health. Using Image, or one of several developing spin-offs, the basic flow of seismic interpretation can be accomplished. It is also capable of some advanced methods such as volume visualization, amplitude calibration to well control, cube displays and reservoir area/volume estimates. All figures in this paper were generated using this free product. At the least, Image is a marvelous data viewer which complements workstation-class systems.
However, for many users, it may be sufficient for the entire interpretation process.", "keywords": "seismic interpretation;software;internet", "title": "Geophysics and NIH Image"} {"abstract": "Since privatisation of the UK Electricity Supply Industry, the merit-order for the dispatch of generating plant has undergone radical changes. In Scotland, where the electricity companies have a broad-based generation portfolio, base load is now usually carried by a combination of nuclear and gas-fired generation. Hydroelectricity has now moved to a much less well-defined position for generation, partly because of its seasonal variability. However, the major consequence of this arrangement has been that coal-fired generation has slipped in the merit-order from the days of Nationalisation and now occupies a mid-merit position. This changed role for coal has imposed new requirements on engineers involved in the operation of, and generation planning for, coal-fired power stations.", "keywords": "coal-fired generation;base load;mid-merit position", "title": "Coal-fired generation in a privatised electricity supply industry"} {"abstract": "We present provably efficient parallel algorithms for sweep scheduling, which is a commonly used technique in radiation transport problems, and involves inverting an operator by iteratively sweeping across a mesh from multiple directions. Each sweep involves solving the operator locally at each cell. However, each direction induces a partial order in which this computation can proceed. On a distributed computing system, the goal is to schedule the computation so that the length of the schedule is minimized. Due to efficiency and coupling considerations, we have an additional constraint, namely, a mesh cell must be processed on the same processor along each direction. Problems similar in nature to sweep scheduling arise in several other applications, and here we formulate a combinatorial generalization of this problem that captures the sweep scheduling constraints and call it the generalized sweep scheduling problem. Several heuristics have been proposed for this problem; see [S. Pautz, An algorithm for parallel S-n sweeps on unstructured meshes, Nucl. Sci. Eng. 140 (2002) 111-136; S. Plimpton, B. Hendrickson, S. Burns, W. McLendon, Parallel algorithms for radiation transport on unstructured grids, Super Comput. (2001)] and the references therein; but none of these have provable worst-case performance guarantees. Here we present a simple, almost linear time randomized algorithm for the generalized sweep scheduling problem that (provably) gives a schedule of length at most O(log^2 n) times the optimal schedule for instances with n cells, when the communication cost is not considered, and a slight variant, which, coupled with a much more careful analysis, gives a schedule of (expected) length O(log m log log log m) times the optimal schedule for m processors. These are the first such provable guarantees for this problem. The algorithm can be extended with an additional multiplicative factor in the case when we have inter-processor communication latency, in the models of Rayward-Smith [UET scheduling with inter-processor communication delays, Discrete Appl. Math. 18 (1) (1987) 55-71] and Hwang et al. [Scheduling precedence graphs in systems with inter-processor communication times, SIAM J. Comput. 18 (2) (1989) 244-257].
Our algorithms are extremely simple, and use no geometric information about the mesh; therefore, these techniques are likely to be applicable in more general settings. We also design a priority-based list schedule using these ideas, with the same theoretical guarantee, but much better performance in practice; combining this algorithm with a simple block decomposition also lowers the overall communication cost significantly. Finally, we perform a detailed experimental analysis of our algorithm. Our results indicate that the algorithm compares favorably, in the length of the schedule produced, with other natural and efficient parallel algorithms proposed in the literature [S. Pautz, An algorithm for parallel S-n sweeps on unstructured meshes, Nucl. Sci. Eng. 140 (2002) 111-136; S. Plimpton, B. Hendrickson, S. Burns, W. McLendon, Parallel algorithms for radiation transport on unstructured grids, Super Comput. (2001)].", "keywords": "parallel algorithms;minimum makespan scheduling;approximation algorithms;sweep scheduling on meshes", "title": "Provable algorithms for parallel generalized sweep scheduling"} {"abstract": "Rendering of virtual views in interactive streaming of compressed image-based scene representations requires random access to arbitrary parts of the reference image data. The degree of interframe dependencies exploited during encoding has an impact on the transmission and decoding time and, at the same time, delimits the (storage) rate-distortion (RD) tradeoff that can be achieved. In this work, we extend the classical RD optimization approach using hybrid video coding concepts to a tradeoff between the storage rate (R), distortion (D), transmission data rate (T), and decoding complexity (C). We present a theoretical model for this RDTC space with a focus on the decoding complexity and, in addition, the impact of client-side caching on the RDTC measures is considered and evaluated. Experimental results qualitatively match those predicted by our theoretical models and show that an adaptation of the encoding process to scenario-specific parameters like the computational power of the receiver and channel throughput can significantly reduce the user-perceived delay or required storage for RDTC optimized streams compared to RD optimized or independently encoded scene representations.", "keywords": "image-based rendering;interactive streaming;rdtc optimization", "title": "RDTC optimized compression of image-based scene representations (Part I): Modeling and theoretical analysis"} {"abstract": "This paper assesses the forecasting performance of count data models applied to arts attendance. We estimate participation models for two artistic activities that differ in their degree of popularity, museums and jazz concerts, with data derived from the 2002 release of the Survey of Public Participation in the Arts for the United States. We estimate a finite mixture model, a zero-inflated negative binomial model, that allows us to distinguish between true non-attendants and goers and their respective behaviour regarding participation in the arts. We evaluate the predictive (in-sample) and forecasting (out-of-sample) accuracy of the estimated model using bootstrapping techniques to compute the Brier score. Overall, the results indicate the model performs well in terms of forecasting.
Finally, we draw certain policy implications from the model's forecasting capacity, thereby allowing the identification of target populations.", "keywords": "forecasting;count data;brier scores;bootstrapping;cultural participation", "title": "Forecasting accuracy of behavioural models for participation in the arts"} {"abstract": "Degeneration of the intervertebral disc may be initiated and supported by impairment of the nutrition processes of the disc cells. The effects of degenerative changes on cell nutrition are, however, only partially understood. In this work, a finite volume model was used to investigate the effect of endplate calcification, water loss, reduction of disc height and cyclic mechanical loading on the sustainability of the disc cell population. Oxygen, lactate and glucose diffusion, production and consumption were modelled with non-linear coupled partial differential equations. Oxygen and glucose consumption and lactate production were expressed as functions of local oxygen concentration, pH and cell density. The cell viability criteria were based on local glucose concentration and pH. Considering a disc with normal water content, cell death was initiated in the centre of the nucleus for oxygen, glucose, and lactate diffusivities in the cartilaginous endplate below 20% of the physiological values. The initial cell population could not be sustained even with non-calcified endplates when a reduction of diffusion inside the disc due to water loss was modelled. Alterations in the disc shape such as height loss, which shortens the transport route between the nutrient sources and the cells, and cyclic mechanical loads could enhance cell nutrition processes.", "keywords": "cell nutrition;cell metabolism;finite volume;cell viability;intervertebral disc", "title": "Effect of intervertebral disc degeneration on disc cell viability: a numerical investigation"} {"abstract": "The selection of recovery strategies is often based only on the types and circumstances of the failures. However, changes in the environment, such as fewer resources at the node level or degradation of quality-of-service, should also be considered before allocating a new process/task to another host or before taking re-configuration decisions. In this paper we present why and how resource availability information should be considered for recovery strategy adaptation. Such resource-aware run-time adaptation of recovery improves the availability and survivability of a system.", "keywords": "recovery strategy;fault tolerance;adaptation;resource monitoring;availability", "title": "RESOURCE AWARE RUN-TIME ADAPTATION SUPPORT FOR RECOVERY STRATEGIES"} {"abstract": "The main purpose of this paper is to investigate the retailer's optimal replenishment decisions under two levels of trade credit policy within the economic production quantity (EPQ) framework. We assume that the supplier offers the retailer a delay period and that the retailer also adopts a trade credit policy to stimulate customer demand, and we develop the retailer's replenishment model for the case where the replenishment rate is finite. Furthermore, we assume that the retailer's trade credit period M offered by the supplier is not shorter than the customers' trade credit period N offered by the retailer (M ≥ N). Since the retailer cannot earn any interest in this situation, M