Schema:
id: string, length 7
title: string, 3 to 578 characters
abstract: string, 0 to 16.7k characters
keyphrases: sequence of strings
prmu: sequence of per-keyphrase labels (P/R/M/U; presumably Present/Reordered/Mixed/Unseen)
-2R:PhF
A robust and efficient finite volume scheme for the discretization of diffusive flux on extremely skewed meshes in complex geometries
In this paper an improved finite volume scheme to discretize diffusive flux on a non-orthogonal mesh is proposed. This approach, based on an iterative technique initially suggested by Khosla [P.K. Khosla, S.G. Rubin, A diagonally dominant second-order accurate implicit scheme, Computers and Fluids 2 (1974) 207–209] and known as deferred correction, has been intensively utilized by Muzaferija [S. Muzaferija, Adaptive finite volume method for flow prediction using unstructured meshes and multigrid approach, Ph.D. Thesis, Imperial College, 1994] and later by Ferziger and Peric [J.H. Ferziger, M. Peric, Computational Methods for Fluid Dynamics, Springer, 2002] to deal with the non-orthogonality of the control volumes. Using a more suitable decomposition of the normal gradient, our scheme gives accurate solutions in geometries where the basic idea of Muzaferija fails. First, the performance of both schemes is compared for a Poisson problem solved in quadrangular domains where the control volumes are increasingly skewed, in order to test their robustness and efficiency. It is shown that the convergence properties and the accuracy order of the solution are not degraded even on extremely skewed meshes. Next, the very stable behavior of the method is successfully demonstrated on a randomly distorted grid as well as on an anisotropically distorted one. Finally, we compare the solution obtained for quadrilateral control volumes to those obtained with a finite element code and with an unstructured version of our finite volume code for triangular control volumes. No differences can be observed between the different solutions, which demonstrates the effectiveness of our approach.
[ "finite volume", "skewed meshes", "deferred correction", "distorted grid", "diffusive flux discretization", "poisson equation" ]
[ "P", "P", "P", "P", "R", "M" ]
-CspH6E
Evaluation of Trend Localization with Multi-Variate Visualizations
Multi-valued data sets are increasingly common, with the number of dimensions growing. A number of multi-variate visualization techniques have been presented to display such data. However, evaluating the utility of such techniques for general data sets remains difficult. Thus most techniques are studied on only one data set. Another criticism that could be levied against previous evaluations of multi-variate visualizations is that the task doesn't require the presence of multiple variables. At the same time, the taxonomy of tasks that users may perform visually is extensive. We designed a task, trend localization, that required comparison of multiple data values in a multi-variate visualization. We then conducted a user study with this task, evaluating five multi-variate visualization techniques from the literature (Brush Strokes, Data-Driven Spots, Oriented Slivers, Color Blending, Dimensional Stacking) and juxtaposed grayscale maps. We report the results and discuss the implications for both the techniques and the task.
[ "multi-variate visualization", "user study", "visual task design", "visual analytics" ]
[ "P", "P", "R", "M" ]
1YHjK4q
An efficient reconfigurable multiplier architecture for Galois field GF(2m)
This paper describes an efficient architecture of a reconfigurable bit-serial polynomial basis multiplier for the Galois field GF(2^m), where 1 < m ≤ M. The degree m of the irreducible polynomial can be changed, and thus configured and programmed. The value of M determines the maximum field size that the multiplier can support. The advantages of the proposed architecture are (i) the high order of flexibility, which allows an easy configuration for different field sizes, and (ii) the low hardware complexity, which results in small area. By using the gated clock technique, a significant reduction of the total multiplier power consumption is achieved.
[ "galois field", "bit-serial", "irreducible polynomial", "polynomial multiplication", "all-one polynomial", "linear feedback shift register", "low power", "cryptography", "elliptic curves" ]
[ "P", "P", "P", "M", "M", "U", "R", "U", "U" ]
2R7oTAk
Experiences in building a Grid-based platform to serve Earth observation training activities
Earth observation data processing and storing can nowadays be done only using distributed systems. Experiments dealing with a large amount of data are possible within the timeframe of a lesson and can give trainees the freedom to innovate. Following these trends and ideas, we have built a proof-of-concept platform, named GiSHEO, for Earth observation educational tasks. It uses Grid computing technologies to analyze and store remote sensing data, and combines them with eLearning facilities. This paper provides an overview of the GiSHEO platform architecture and of its technical and innovative solutions.
[ "distributed systems", "image processing software", "earth and atmospheric sciences", "computer uses in education" ]
[ "P", "M", "M", "R" ]
b8c2PMf
Field D* path-finding on weighted triangulated and tetrahedral meshes
Classic shortest path algorithms operate on graphs, which are suitable for problems that can be represented by weighted nodes or edges. Finding a shortest path through a set of weighted regions is more difficult and only approximate solutions tend to scale well. The Field D* algorithm efficiently calculates an approximate, interpolated shortest path through a set of weighted regions and was designed for navigating robots through terrains with varying characteristics. Field D* operates on unit grid or quad-tree data structures, which require high resolutions to accurately model the boundaries of irregular world structures. In this paper, we extend the Field D* cost functions to 2D triangulations and 3D tetrahedral meshes: structures which model polygonal world structures more accurately. Since robots typically have limited resources available for computation and storage, we pay particular attention to computation and storage overheads when detailing our extensions. We begin by providing analytic solutions to the minimum of each cost function for 2D triangles and 3D tetrahedra. Our triangle implementation provides a 50% improvement in performance over an existing triangle implementation. Our 3D extension to tetrahedra is the first full analytic extension of Field D* to 3D; previous work only provided an approximate minimization for a single cost function on a 3D cube with unit lengths. Each cost function is expressed in terms of a general function whose characteristics can be exploited to reduce the calculations required to find a minimum. These characteristics can also be exploited to cache the majority of cost functions, producing a speedup of up to 28% in the 3D tetrahedral case. We demonstrate that, in environments composed of non-grid-aligned data, multi-resolution quad-tree Field D* requires an order of magnitude more faces and between 15 and 20 times more node expansions to produce a path of similar cost to one produced by a triangle implementation of Field D* on a lower resolution triangulation. We provide examples of 3D pathing through models of complex topology, including pathing through anatomical structures extracted from a medical data set. To summarise, this paper details a robust and efficient extension of Field D* pathing to data sets represented by weighted triangles and tetrahedra, and also provides empirical data which demonstrates the reduction in storage and computation costs that accrue when one chooses such a representation over the more commonly used quad-tree and grid-based alternatives.
[ "representations", "artificial intelligence", "problem solving", "control methods and search", "graph and tree search strategies", "vision and scene understanding", "data structures and transforms", "perceptual reasoning" ]
[ "P", "U", "M", "M", "M", "M", "M", "U" ]
1Vp&CtA
Correlation analysis of signal flow in a model prefrontal cortical circuit representing multiple target locations
In spite of the recent cross-correlation analyses of monkey prefrontal cortical neurons performing spatial working memory tasks (J. Neurosci. 21 (2001) 3646; Cerebr. Cortex 10 (2000) 535), it is uncertain to what degree the correlation data reflect the circuitry of highly recurrent networks. We performed a computer simulation of a model cortical circuit, whose connectivity is fully known, and analyzed the cross-correlations of the spikes of pairs of neurons in the model. The result shows that cross-correlation histograms (CCHs) of pyramidal-pyramidal pairs tend to mask higher-order synaptic interactions, yielding CCHs with central peaks or almost flat CCHs. However, CCHs of pyramidal-interneuron pairs show displaced positive and/or negative peaks, depending on the connectivity of these neurons.
[ "circuit", "working memory", "delay-period activity", "prefrontal cortex", "intracortical inhibition" ]
[ "P", "P", "U", "R", "U" ]
4UW1QN8
Slow-dynamic finite element simulation of manufacturing processes
Explicit time integration and dynamic finite element formulations are increasingly being used to analyze nonlinear static problems in solid and structural mechanics. This is particularly true in the simulation of sheet metal manufacturing processes. Employment of slow-dynamic, quasi-static techniques in static problems can introduce undesirable dynamic effects that originate from the inertia forces of the governing equations. In this paper, techniques and guidelines are presented, analyzed and demonstrated, which enable the minimization of the undesirable dynamic effects. The effect of the duration and functional form of the time histories of the loads and boundary conditions is quantified by the analysis of a linear spring-mass oscillator. The resulting guidelines and techniques are successfully demonstrated in the nonlinear finite element simulation of a sheet metal deep drawing operation. The accuracy of the quasi-static, slow-dynamic finite element analyses is evaluated by comparison to results of laboratory experiments and purely static analyses. Various measures that quantify the dynamic effects, including kinetic energy, also are discussed.
[ "dynamic", "finite element", "explicit", "nonlinear", "quasi-static", "sheet metal", "implicit" ]
[ "P", "P", "P", "P", "P", "P", "U" ]
52bss6p
Improving TCP performance in integrated wireless communications networks
Many analytical and simulation-based studies of TCP performance in wireless environments assume an error-free and congestion-free reverse channel that has the same capacity as the forward channel. Such an assumption does not hold in many real-world scenarios, particularly in the hybrid networks consisting of various wireless LAN (WLAN) and cellular technologies. In this paper, we first study, through extensive simulations, the performance characteristics of four representative TCP schemes, namely TCP New Reno, SACK, Veno, and Westwood, under the network conditions of asymmetric end-to-end link capacities, correlated wireless errors, and link congestion in both forward and reverse directions. We then propose a new TCP scheme, called TCP New Jersey, which is capable of distinguishing wireless packet losses from congestion packet losses, and reacting accordingly. TCP New Jersey consists of two key components, the timestamp-based available bandwidth estimation (TABE) algorithm and the congestion warning (CW) router configuration. TABE is a TCP-sender-side algorithm that continuously estimates the bandwidth available to the connection and guides the sender to adjust its transmission rate when the network becomes congested. TABE is immune to the ACK drops as well as ACK compression. CW is a configuration of network routers such that routers alert end stations by marking all packets when there is a sign of an incipient congestion. The marking of packets by the CW-configured routers helps the sender of the TCP connection to effectively differentiate packet losses caused by network congestion from those caused by wireless link errors. Our simulation results show that TCP New Jersey is able to accurately estimate the available bandwidth of the bottleneck link of an end-to-end path; and the TABE estimator is immune to link asymmetry, bi-directional congestion, and the relative position of the bottleneck link in the multi-hop end-to-end path. The proactive congestion avoidance control mechanism proposed in our scheme minimizes the network congestion, reduces the network volatility, and stabilizes the queue lengths while achieving more throughput than other TCP schemes.
[ "bandwidth estimation", "wireless tcp", "explicit congestion notification", "congestion control", "loss differentiation" ]
[ "P", "R", "M", "R", "R" ]
-wmHjG3
A Hopfield neural network based task mapping method
With prior knowledge of a program, static mapping aims to identify an optimal clustering strategy that can produce the best performance. In this paper we present a static method that uses a Hopfield neural network to cluster the tasks of a parallel program for a given system. This method takes into account both load balancing and communication minimization. The method has been tested on a distributed shared memory system against three other clustering methods. Four programs, SOR, N-body, Gaussian Elimination and VQ, are used in the test. The results show that our method is superior to the other three.
[ "task mapping", "optimization", "distributed shared memory", "hopfield network" ]
[ "P", "P", "P", "R" ]
HiGAq&9
Investigating the evolution of code smells in object-oriented systems
Software design problems are known and perceived under many different terms, such as code smells, flaws, non-compliance to design principles, violation of heuristics, excessive metric values and anti-patterns, signifying the importance of handling them in the construction and maintenance of software. Once a design problem is identified, it can be removed by applying an appropriate refactoring, improving in most cases several aspects of quality such as maintainability, comprehensibility and reusability. This paper, taking advantage of recent advances and tools in the identification of non-trivial code smells, explores the presence and evolution of such problems by analyzing past versions of code. Several interesting questions can be investigated such as whether the number of problems increases with the passage of software generations, whether problems vanish by time or only by targeted human intervention, whether code smells occur in the course of evolution of a module or exist right from the beginning and whether refactorings targeting at smell removal are frequent. In contrast to previous studies that investigate the application of refactorings in the history of a software project, we attempt to analyze the evolution from the point of view of the problems themselves. To this end, we classify smell evolution patterns distinguishing deliberate maintenance activities from the removal of design problems as a side effect of software evolution. Results are discussed for two open-source systems and four code smells.
[ "evolution", "code smell", "refactoring", "software repositories", "software history" ]
[ "P", "P", "P", "M", "R" ]
4E9muzC
issues in parallelizing multigrid-based substrate model extraction and analysis
Accurate modeling of coupling effects via the substrate is an increasingly important concern in the design of mixed-signal systems such as communication, biomedical and analog signal processing circuits. Fast-switching digital blocks inject noise into the common substrate, hindering the performance of high-precision, sensitive analog circuitry. As miniaturization increases IC complexity, the accuracy requirements for substrate coupling simulation inevitably grow. Due in part to the global nature of such couplings, model extraction and analysis is a computation-intensive task requiring the availability of fast and accurate substrate model extraction and analysis tools. One way to deal with this problem is to take further advantage of available computational technologies, and distributed computing emerges as an interesting solution. In this paper we discuss several issues related to the parallelization of a multigrid-based substrate model extraction and analysis tool. This tool is used as a proxy for generic computations on a 3D discretized volume. The results presented indicate potential avenues for successfully exploiting parallelism as well as pitfalls to avoid in such a quest.
[ "substrate coupling", "multigrid", "grid computing" ]
[ "P", "U", "M" ]
KPzCxuq
Hypoxia-induced phrenic long-term facilitation: emergent properties
As in other neural systems, plasticity is a hallmark of the neural system controlling breathing. One spinal mechanism of respiratory plasticity is phrenic long-term facilitation (pLTF) following acute intermittent hypoxia. Although cellular mechanisms giving rise to pLTF occur within the phrenic motor nucleus, different signaling cascades elicit pLTF under different conditions. These cascades, referred to as Q and S pathways to phrenic motor facilitation (pMF), interact via cross-talk inhibition. Whereas the Q pathway dominates pLTF after mild to moderate hypoxic episodes, the S pathway dominates after severe hypoxic episodes. The biological significance of multiple pathways to pMF is unknown. This review will discuss the possibility that interactions between pathways confer emergent properties to pLTF, including pattern sensitivity and metaplasticity. Understanding these mechanisms and their interactions may enable us to optimize intermittent hypoxia-induced plasticity as a treatment for patients that suffer from ventilatory impairment or other motor deficits.
[ "plasticity", "intermittent hypoxia", "pattern sensitivity", "metaplasticity", "motor neuron", "phrenic nerve" ]
[ "P", "P", "P", "P", "M", "M" ]
-bjxpRZ
Bivariate Mellin convolution operators: Quantitative approximation theorems
In this paper we study some qualitative and quantitative versions of the Voronovskaja approximation formulae for a class of bivariate Mellin convolution operators of type $(T_w f)(x, y) = \int_{\mathbb{R}_+^2} K_w(t x^{-1}, v y^{-1}) f(t, v)\, \frac{dt\, dv}{t v}$. Moreover we apply the general theory to some particular cases, leading to various asymptotic formulae involving various differential operators.
[ "mellin operators", "moments", "k-functional", "voronovskaja formula" ]
[ "R", "U", "U", "R" ]
35snhqa
E-government evolution in EU local governments: a comparative perspective
Purpose - The purpose of this paper is to describe an empirical study of the advances and trends of e-government in transparency, openness and hence accountability in European Union (EU) local governments to determine the extent to which the internet promotes the convergence towards more transparent and accountable government. The paper also tests the extent to which different factors related to the implementation of information and communication technologies (ICTs), the number of inhabitants and the type of public administration style have influenced e-government developments in the cities studied. Design/methodology/approach - A comprehensive content analysis of 75 local government web sites was conducted using a 73-item evaluation questionnaire. The evaluations were performed in 2004 and 2007 and 15 EU countries were covered (five per country). To analyse the evolution of e-government, several techniques were used: tests of difference of means, multidimensional scaling and cluster analysis. The contribution of the different contextual factors to the development of government web sites was tested with OLS regression analysis. Findings - The results show noticeable progress in the application of ICTs and increasing EU local government concern for bringing government closer to citizens and for giving an image of modernity and responsiveness, although few web sites show clear signs of real openness to encouraging citizen dialogue. The evolution of the e-government initiatives analysed shows that, at present, they are still overlapped with the public administration style of each country as an extension of traditional front offices with potential benefits in speed and accessibility. Originality/value - Although a growing number of e-government studies are appearing, previous research has not analysed the evolution of EU local governments from a comparative perspective.
[ "local government", "european union", "public administration style", "econometrics" ]
[ "P", "P", "P", "U" ]
4ibbW-g
Soccer video processing for the detection of advertisement billboards
Billboards are placed along the sides of a soccer field for advertisement during match telecasts. Unlike regular commercials, which are introduced during a break, on-field billboards appear on the TV screen at uncertain time instances, in different sizes, and for different durations. Automated processing of soccer telecasts for detection and analysis of such billboards can provide important information on the effectiveness of this mode of advertising. We propose a method in which shot boundaries are first identified and the type of each shot is determined. Frames within each shot are then segmented to locate possible regions of interest (ROIs), locations in a frame where billboards are potentially present. Finally, we use a combination of local and global features for detecting individual billboards by matching with a set of given templates.
[ "advertisement billboard", "soccer telecast", "region of interest", "template matching" ]
[ "P", "P", "P", "R" ]
4-eHDsA
Multi-component image segmentation in homogeneous regions based on description length minimization: Application to speckle, Poisson and Bernoulli noise
In this article, a minimum description length (MDL) criterion adapted to independent multi-component image segmentation into homogeneous regions is proposed. This approach, based on a deformable polygonal grid, allows us to segment noisy multi-component images perturbed with spatially independent speckle, Poisson or Bernoulli noise. The advantages of using such a multi-component approach rather than a mono-component one is demonstrated on synthetic and real images. This segmentation method is also applicable to multi-component images whose components do not follow the same noise statistics or have not been previously registered.
[ "multi-component images", "image segmentation", "minimum description length principle", "polygonal active contours", "image registration" ]
[ "P", "P", "M", "M", "M" ]
-JYGLff
Language dominance in interpersonal deception in computer-mediated communication
Dominance is not only a complicated social phenomenon that involves interpersonal dynamics, but also an effective strategy used in various applications such as deception detection, negotiation, and online communities. The extensive literature on dominance has primarily focused on the personality traits and socio-biological influence, as well as various nonverbal and paralinguistic behaviors associated with dominance. Nonetheless, language dominance manifested through dynamically acquired linguistic capability and strategies has not been fully investigated. The exploration of language dominance in the context of deception is even rarer. With the increasing use of computer-mediated communication (CMC) in all aspects of modern life, language dominance in CMC has emerged as an important issue. This study examines language dominance in the context of deception via CMC. The experimental results show that deceivers: (1) demonstrate a different trend of language dominance from truthtellers over time; (2) manipulate the level of language dominance by initiating communication with low dominance and gradually increasing the level over the course of interaction, and (3) display higher levels of dominance in terms of some linguistic behaviors than truthtellers. They suggest that in CMC, deceivers not only adjust the level of language dominance more frequently, but also change it more remarkably than truthtellers.
[ "dominance", "interpersonal deception", "computer-mediated communication", "linguistic behavior" ]
[ "P", "P", "P", "P" ]
1MVMrSg
Benchmarking short sequence mapping tools
The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, currently proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison.
[ "benchmark", "short sequence mapping", "next-generation sequencing", "sequence analysis" ]
[ "P", "P", "P", "M" ]
-eLHrnY
An integrated model for the latency and steady-state throughput of TCP connections
Most TCP connections in today's Internet transfer data on the order of only a few kilobytes. Such TCP transfers are very short and spend most of their time in the slow start phase. Thus the underlying assumptions made by steady-state models cease to hold, making them unsuitable for modeling finite flows. In this paper, we propose an accurate model for estimating the transfer times of TCP flows of arbitrary size. Our model gives a more accurate estimation of the transfer times than those predicted by Cardwell et al. [Proceedings of the IEEE INFOCOM, Tel Aviv, Israel, March 2000, pp. 1742–1751], which extends the steady-state analysis of Padhye et al. [IEEE/ACM Trans. Networking 8 (2) (2000) 133] to model finite flows. The main features of our work are the modeling of timeouts and slow start phases which occur anywhere during the transfer, and a more accurate model for the evolution of the cwnd in the slow start phase. Additionally, the proposed model can also model the steady-state throughput of TCP connections. The model is verified using web-based measurements of real-life TCP connections. We also introduce an empirical model which allows a better feel for TCP latency and the nature of its dependence on loss probabilities and window limitation. Finally, the paper investigates the effect of window limitation and packet size on TCP latency.
[ "latency", "steady-state throughput", "tcp connections", "internet" ]
[ "P", "P", "P", "P" ]
-dKdeSw
Causality of frontal and occipital alpha activity revealed by directed coherence
Recently there has been increased attention to the causality among biomedical signals. The causality between brain structures involved in the generation of alpha activity is examined based on EEG signals acquired simultaneously in the frontal and occipital regions of the scalp. The concept of directed coherence (DC) is introduced as a means of resolving two-signal observations into the constituent components of original signals, the interaction between signals and the influence of one signal source on the other, through autoregressive modeling. The technique was applied to EEG recorded from 11 normal subjects with eyes closed. Through an analysis of the directed coherence, it was found that in both the left and right hemispheres, alpha rhythms with relatively low frequency had a significantly higher correlation in the frontal-occipital direction than in the opposite direction. In the upper alpha frequency band, a significantly higher DC was observed in the occipital-frontal direction, and the right-left DC in the occipital area was consistently higher. The activity of rhythms near 10 Hz was widespread. These results suggest that there is a difference in the genesis and the structure of information transmission in the lower and upper band, and for 10-Hz alpha waves.
[ "coherence", "eeg", "autoregressive model", "alpha rhythm" ]
[ "P", "P", "P", "P" ]
37fT7FT
Learning finite binary sequences from half-space data
The problem of inferring a finite binary sequence $w^* \in \{-1, 1\}^n$ is considered. It is supposed that at epochs $t = 1, 2, \ldots$, the learner is provided with random half-space data in the form of finite binary sequences $u^{(t)} \in \{-1, 1\}^n$ which have positive inner product with $w^*$. The goal of the learner is to determine the underlying sequence $w^*$ in an efficient, on-line fashion from the data $\{u^{(t)}, t \ge 1\}$. In this context, it is shown that the randomized, on-line directed drift algorithm produces a sequence of hypotheses $\{w^{(t)} \in \{-1, 1\}^n, t \ge 1\}$ which converges to $w^*$ in finite time with probability 1. It is shown that while the algorithm has a minimal space complexity of $2n$ bits of scratch memory, it has exponential time complexity with an expected mistake bound of order $\Omega(e^{0.139n})$. Batch incarnations of the algorithm are introduced which allow for massive improvements in running time with a relatively small cost in space (batch size). In particular, using a batch of $O(n \log n)$ examples at each update epoch reduces the expected mistake bound of the (batch) algorithm to $O(n)$ (in an asynchronous bit update mode) and $O(1)$ (in a synchronous bit update mode). The problem considered here is related to binary integer programming and to learning in a mathematical model of a neuron.
[ "directed drift", "neuron", "on-line learning", "batch learning", "binary perceptron" ]
[ "P", "P", "R", "R", "M" ]
3N:VhGW
projection-based statistical analysis of full-chip leakage power with non-log-normal distributions
In this paper we propose a novel projection-based algorithm to estimate the full-chip leakage power with consideration of both inter-die and intra-die process variations. Unlike many traditional approaches that rely on log-Normal approximations, the proposed algorithm applies a novel projection method to extract a low-rank quadratic model of the logarithm of the full-chip leakage current and, therefore, is not limited to log-Normal distributions. By exploring the underlying sparse structure of the problem, an efficient algorithm is developed to extract the non-log-Normal leakage distribution with linear computational complexity in circuit size. In addition, an incremental analysis algorithm is proposed to quickly update the leakage distribution after changes to a circuit are made. Our numerical examples in a commercial 90 nm CMOS process demonstrate that the proposed algorithm provides 4x error reduction compared with the previously proposed log-Normal approximations, while achieving orders of magnitude more efficiency than a Monte Carlo analysis with 10^4 samples.
[ "statistical analysis", "leakage power" ]
[ "P", "P" ]
-6iYn-9
Hybrid numerical methods for convection-diffusion problems in arbitrary geometries
The hybrid nodal-integral/finite element method (NI-FEM) and the hybrid nodal-integral/finite analytic method (NI-FAM) are developed to solve the steady-state, two-dimensional convection-diffusion equation (CDE). The hybrid NI-FAM for the steady-state problem is then extended to solve the more general time-dependent, two-dimensional CDE. These hybrid coarse mesh methods, unlike the conventional nodal-integral approach, are applicable in arbitrary geometries and maintain the high efficiency of the conventional nodal-integral method (NIM). In steady-state problems, the computational domain for both hybrid methods is discretized using rectangular nodes in the interior of the domain and along vertical and horizontal boundaries, while triangular nodes are used along the boundaries that are not parallel to the x or y axes. In time-dependent problems, the rectangular and triangular nodes become space-time parallelepiped and wedge-shaped nodes, respectively. The difference schemes for the variables on the interfaces of adjacent rectangular/parallelepiped nodes are developed using the conventional NIM. For the triangular nodes in the hybrid NI-FEM, a trial function is written in terms of the edge-averaged concentration of the three edges and made to satisfy the CDE in an integral sense. In the hybrid NI-FAM, the concentration over the triangular/wedge-shaped nodes is represented using a finite analytic approximation, which is based on the analytic solution of the one-dimensional CDE. The difference schemes for both hybrid methods are then developed for the interfaces between the rectangular/parallelepiped and triangular/wedge-shaped nodes by imposing continuity of the flux across the interfaces. A formal derivation of these hybrid methods and numerical results for several test problems are presented and discussed.
[ "arbitrary geometries", "convection-diffusion equation", "nodal-integral method", "finite analytic method", "finite element method" ]
[ "P", "P", "P", "R", "R" ]
-1wUsen
Constrained Ellipse Fitting with Center on a Line
Fitting an ellipse to given data points is a common optimization task in computer vision problems. However, the possibility of incorporating the prior constraint that the ellipse's center is located on a given line into the optimization algorithm has not been examined so far. This problem arises, for example, when fitting an ellipse to data points representing the path of the image positions of an adhesion inside a rotating vessel whose rotational axis position in the image is known. Our new method makes use of a constrained algebraic cost function that incorporates the ellipse-center-on-a-given-line prior condition in a globally convergent, one-dimensional optimization approach. Further advantages of the algorithm are computational efficiency and numerical stability.
[ "ellipse fitting", "constrained cost function", "eigenvalue problem" ]
[ "P", "R", "M" ]
3D8UN38
Trellis: Portability across architectures with a high-level framework
Trellis shows the programmability benefits of a common and portable set of directives. We illustrate the descriptive capability of directives that can support portable codes. We enhance the OpenACC model with more efficient mapping and synchronization. We implement a prototype source translation of Trellis to OpenMP, OpenACC and CUDA.
[ "parallel computation", "parallel frameworks", "parallel architectures", "loop mapping" ]
[ "U", "M", "M", "M" ]
1GJGkb1
pedagogical content knowledge in programming education for secondary school
Dissertation overview, addressing the concept of Pedagogical Content Knowledge for the teaching and learning of programming in secondary education.
[ "pedagogical content knowledge", "programming", "secondary school", "teaching", "learning" ]
[ "P", "P", "P", "P", "P" ]
1uB3Fyt
Interference Analysis for Highly Directional 60-GHz Mesh Networks: The Case for Rethinking Medium Access Control
We investigate spatial interference statistics for multigigabit outdoor mesh networks operating in the unlicensed 60-GHz "millimeter (mm) wave" band. The links in such networks are highly directional: Because of the small carrier wavelength (an order of magnitude smaller than those for existing cellular and wireless local area networks), narrow beams are essential for overcoming higher path loss and can be implemented using compact electronically steerable antenna arrays. Directionality drastically reduces interference, but it also leads to "deafness," making implicit coordination using carrier sense infeasible. In this paper, we make a quantitative case for rethinking medium access control (MAC) design in such settings. Unlike existing MAC protocols for omnidirectional networks, where the focus is on interference management, we contend that MAC design for 60-GHz mesh networks can essentially ignore interference and must focus instead on the challenge of scheduling half-duplex transmissions with deaf neighbors. Our main contribution is an analytical framework for estimating the collision probability in such networks as a function of the antenna patterns and the density of simultaneously transmitting nodes. The numerical results from our interference analysis show that highly directional links can indeed be modeled as pseudowired, in that the collision probability is small even with a significant density of transmitters. Furthermore, simulation of a rudimentary directional slotted Aloha protocol shows that packet losses due to failed coordination are an order of magnitude higher than those due to collisions, confirming our analytical results and highlighting the need for more sophisticated coordination mechanisms.
[ "interference analysis", "medium access control (mac)", "60-ghz networks", "millimeter (mm) wave networks", "wireless mesh networks" ]
[ "P", "P", "R", "R", "R" ]
1UiwQ68
The influence of skeletal muscle anisotropy on electroporation: in vivo study and numerical modeling
The aim of this study was to theoretically and experimentally investigate electroporation of mouse tibialis cranialis and to determine the reversible electroporation threshold values needed for parallel and perpendicular orientation of the applied electric field with respect to the muscle fibers. Our study was based on the local electric field calculated with three-dimensional realistic numerical models that we built, and on in vivo visualization of electroporated muscle tissue. We established that electroporation of muscle cells in tissue depends on the orientation of the applied electric field; the local electric field threshold values (pulse parameters: 8 × 100 µs, 1 Hz) were determined to be 80 V/cm and 200 V/cm for parallel and perpendicular orientation, respectively. Our results could serve as useful electric field parameters in the control of skeletal muscle electroporation, which can be used in treatment planning of electroporation-based therapies such as gene therapy, genetic vaccination, and electrochemotherapy.
[ "skeletal muscle", "in vivo electroporation", "tissue anisotropy", "magnetic resonance imaging", "local electric field distribution" ]
[ "P", "R", "R", "U", "M" ]
511aJLg
The role of commutativity in constraint propagation algorithms
Constraint propagation algorithms form an important part of most constraint programming systems. We provide here a simple, yet very general framework that allows us to explain several constraint propagation algorithms in a systematic way. In this framework we proceed in two steps. First, we introduce a generic iteration algorithm on partial orderings and prove its correctness in an abstract setting. Then we instantiate this algorithm with specific partial orderings and functions to obtain specific constraint propagation algorithms. In particular, using the notions of commutativity and semi-commutativity, we show that the AC-3, PC-2, DAC, and DPC algorithms for achieving (directional) arc consistency and (directional) path consistency are instances of a single generic algorithm. The work reported here extends and simplifies that of Apt [1999a].
[ "commutativity", "constraint propagation", "generic algorithms" ]
[ "P", "P", "P" ]
45tbxHR
SmartRank: a smart scheduling tool for mobile cloud computing
Resource scarcity is a major obstacle for many mobile applications, since devices have limited energy and processing potential. As an example, there are applications that seamlessly augment human cognition and typically require resources that far outstrip mobile hardware's capabilities, such as language translation, speech recognition, and face recognition. A new trend has been explored to tackle this problem: the use of cloud computing. This study presents SmartRank, a scheduling framework to perform load partitioning and offloading for mobile applications using cloud computing to increase performance in terms of response time. We first explore a benchmark of a face recognition application using the mobile cloud and confirm its suitability as a case study for SmartRank. We have applied the approach to a face recognition process based on two strategies: cloudlet federation and resource ranking through balanced metrics (level of CPU utilization and round-trip time). Second, using a full factorial experimental design, we tuned SmartRank with the most suitable partitioning decision, calibrating scheduling parameters. Moreover, SmartRank uses an equation that is extensible to include new parameters, making it applicable to other scenarios.
[ "mobile cloud computing", "partitioning", "offloading", "performance evaluation" ]
[ "P", "P", "P", "M" ]
-oc718C
New global exponential stability conditions for inertial Cohen-Grossberg neural networks with time delays
In this paper, the global exponential stability of inertial Cohen-Grossberg neural networks with time delays is investigated. By using the homeomorphism theorem and an inequality technique, an LMI-based global exponential stability condition and an inequality-form global exponential stability condition are obtained for the above neural networks. In our results, the assumptions of differentiability and monotonicity on the behaved functions in Ke and Miao (2013) [23] are removed; thus our results are less conservative than those obtained in Ke and Miao (2013) [23], and new global exponential stability conditions are established for this class of neural networks.
[ "inertial cohengrossberg neural networks", "homeomorphism", "inequality technique", "lyapunov functional", "lmi" ]
[ "P", "P", "P", "M", "U" ]
4SD1W4&
Extracting the fetal heart rate variability using a frequency tracking algorithm
In this work, we propose an algorithm to extract the fetal heart rate variability from an ECG measured on the mother's abdomen. The algorithm consists of two methods: a separation algorithm based on second-order statistics that extracts the desired signal in a single pass through the data, and a heart instantaneous frequency (HIF) estimator. The HIF algorithm is used to extract the mother's heart rate, which serves as a reference to extract the fetal heart rate. We carried out simulations where the signals overlap in frequency and time, and showed that the algorithm works efficiently.
[ "second-order statistics", "source separation", "independent component analysis", "analytic signal", "a priori information", "auto-correlation" ]
[ "P", "M", "U", "M", "M", "U" ]
1uW9Yhg
Education and training in health informatics: the IT-EDUCTRA project
In this contribution, both the EDUCTRA project of the European Advanced Informatics in Medicine (AIM) Programme and the IT-EDUCTRA project of the Telematics Applications Programme (Health Sector) are described. EDUCTRA aimed to investigate which gaps in knowledge health professionals have with respect to health informatics and to suggest ways to remedy them. It was assumed that health professionals had a basic understanding of health informatics and that additional educational material only had to cover the knowledge necessary for appreciating the new products coming from the AIM programme. A state-of-the-art survey revealed that the knowledge of health professionals with respect to health informatics was deplorable. Guidelines for curricula were therefore proposed to enable potential teachers to design courses. IT-EDUCTRA is a continuation of the EDUCTRA project. It aims to create learning materials covering a broad area of health informatics.
[ "education and training", "health informatics", "it-eductra" ]
[ "P", "P", "P" ]
xMwovkC
Methods for reasoning from geometry about anatomic structures injured by penetrating trauma
This paper presents the methods used for three-dimensional (3D) reasoning about anatomic structures affected by penetrating trauma in TraumaSCAN-Web, a platform-independent decision support system for evaluating the effects of penetrating trauma to the chest and abdomen. In assessing outcomes for an injured patient, TraumaSCAN-Web utilizes 3D models of anatomic structures and 3D models of the regions of damage associated with stab and gunshot wounds to determine the probability of injury to anatomic structures. Probabilities estimated from 3D reasoning about affected anatomic structures serve as input to a Bayesian network which calculates posterior probabilities of injury based on these initial probabilities together with available information about patient signs, symptoms and test results. In addition to displaying textual descriptions of conditions arising from penetrating trauma to a patient, TraumaSCAN-Web allows users to visualize the anatomy suspected of being injured in 3D, in this way providing a guide to its reasoning process.
[ "decision support", "3d modeling", "expert systems" ]
[ "P", "P", "M" ]
3G5TS-X
Use of an effective stiffness matrix for the free vibration analyses of a non-uniform cantilever beam carrying multiple two-degree-of-freedom spring-damper-mass systems
This paper investigates the free vibration characteristics of a non-uniform cantilever beam carrying multiple two degree-of-freedom (dof) spring-damper-mass systems by means of two finite element methods, FEM1 and FEM2. FEM1 is the conventional finite element method (FEM), with each two-dof spring-damper-mass system considered as a finite element possessing stiffness, damping and mass matrices, while FEM2 is an alternative approach in which each two-dof spring-damper-mass system is replaced by an effective stiffness matrix composed of four massless effective springs. Instead of using both the real part and the imaginary part of a complex eigenvalue to derive the mathematical expressions, this paper directly employs the implicit form of the complex eigenvalue to formulate the problem. In FEM1, since each spring-damper-mass system has two dof, the total dof of the entire system increases by two whenever the beam carries one more two-dof spring-damper-mass system. In FEM2, however, the total dof of the entire system remains unchanged, because all dof of each spring-damper-mass system are suppressed by the effective stiffness matrix. Good agreement between the natural frequencies obtained from FEM2 and the corresponding ones from FEM1 confirms the reliability of the presented theory.
[ "effective stiffness matrix", "finite element method", "complex eigenvalue", "natural frequency", "degree of freedom (dof)" ]
[ "P", "P", "P", "P", "M" ]
13Dvro-
A DTC strategy dedicated to three-switch three-phase inverter-fed induction motor drives
Purpose - The purpose of this paper is to describe the implementation of a direct torque control strategy dedicated to three-switch three-phase delta-shaped inverter (TSTPI) fed induction motor drives, as well as the comparison of its performance with that yielded by six-switch three-phase inverter (SSTPI) fed induction motor drives under the Takahashi DTC strategy. Design/methodology/approach - Referring to the asymmetrical stator voltage vectors and in order to reach a highly dynamic electromagnetic torque response with low ripple, the design of the vector selection table should include virtual voltage vectors obtained by the subdivision of each sector into two equal sub-sectors. Findings - It has been shown that the implementation of the proposed DTC strategy in TSTPI-fed induction motor drives leads to higher transient behaviour and better steady-state features than those exhibited by the Takahashi DTC strategy implemented in SSTPI-fed induction motor drives. Research limitations/implications - The research should be extended to a comparison of the obtained simulation results with experimental measurements. Practical implications - A 50 per cent reduction of cost and compactness associated with a 50 per cent increase of reliability makes the TSTPI an interesting candidate, especially in large-scale production applications such as the automotive industry. Originality/value - The paper proposes an approach to improve the cost-effectiveness, the compactness and the reliability of TSTPI-fed induction motor drives, which represents a crucial benefit in electric and hybrid propulsion systems.
[ "torque", "vectors", "simulation", "ellectric motors" ]
[ "P", "P", "P", "M" ]
4ffNA14
Hybrid generative/discriminative classifier for unconstrained character recognition
Handwriting recognition for hand-held devices like PDAs requires very accurate and adaptive classifiers. It is such a complex classification problem that it is now quite usual to make several classification methods cooperate. In this paper, we present an original two-stage recognizer. The first stage is a model-based classifier which stores an exhaustive set of character models. The second stage is a pairwise classifier which separates the most ambiguous pairs of classes. This hybrid architecture is based on the idea that the correct class almost systematically belongs to the two most relevant classes found by the first classifier. Experiments on an 80,000-example database show a 30% improvement on a 62-class recognition problem. Moreover, we show experimentally that such an architecture is perfectly suited to incremental classification.
[ "handwriting recognition", "adaptive classifier", "multiple classifier system", "pairwise neural networks", "confusion matrix" ]
[ "P", "P", "M", "M", "U" ]
-1xetjw
optimal sleep patterns for serving delay-tolerant jobs
Sleeping is an important method to reduce energy consumption in many information and communication systems. In this paper we focus on a typical server under dynamic load, where entering and leaving sleep mode incurs an energy and a response time penalty. We seek to understand under what system configurations and control methods sleep mode obtains a Pareto-optimal tradeoff between energy saving and average response time. We prove that the optimal sleeping policy has a simple hysteretic structure. Simulation results then show that this policy yields significant energy savings, especially for relatively delay-insensitive applications and under low traffic load. However, we demonstrate that seeking the maximum energy saving presents another tradeoff: it drives up the peak temperature in the server, with potential reliability consequences.
[ "switching cost", "sleep state", "pareto tradeoff", "markov decision process", "energy efficiency" ]
[ "U", "M", "R", "U", "M" ]
13XLsuy
On channel-discontinuity-constraint routing in wireless networks
Multi-channel wireless networks are increasingly deployed as infrastructure networks, e.g. in metro areas. Network nodes frequently employ directional antennas to improve spatial throughput. In such networks, between two nodes, it is of interest to compute a path with a channel assignment for the links such that the path and link bandwidths are the same. This is achieved when any two consecutive links are assigned different channels, termed the "Channel-Discontinuity-Constraint" (CDC). CDC-paths are also useful in TDMA systems, where, preferably, consecutive links are assigned different time-slots. In the first part of this paper, we develop a t-spanner for CDC-paths using spatial properties; a sub-network containing O(n/θ) links, for any θ > 0, such that CDC-paths increase in cost by at most a factor t = (1 - 2 sin(θ/2))^(-2). We propose a novel distributed algorithm to compute the spanner using an expected number of O(n log n) fixed-size messages. In the second part, we present a distributed algorithm to find minimum-cost CDC-paths between two nodes using O(n^2) fixed-size messages, by developing an extension of Edmonds' algorithm for minimum-cost perfect matching. In a centralized implementation, our algorithm runs in O(n^2) time, improving the previous best algorithm, which requires O(n^3) running time. Moreover, this running time improves to O(n/θ) when used in conjunction with the spanner developed.
[ "routing", "directional antennas", "spanners", "algorithms" ]
[ "P", "P", "P", "P" ]
4ZhYJja
Composition of aspects based on a relation model: Synergy of multiple paradigms
Software composition for timely and affordable software development and evolution is one of the oldest pursuits of software engineering. Among current software composition techniques, Component-Based Software Development (CBSD) and Aspect-Oriented Software Development (AOSD) have attracted academic and industrial attention. Black-box composition used in CBSD provides simple and safe modularization through its strong information hiding, which is, however, the main obstacle to a black-box composite evolving later. This implies that an application developed through black-box composition cannot take advantage of Aspect-Oriented Programming (AOP) used in AOSD. On the contrary, AOP enhances maintainability and comprehensibility by modularizing concerns crosscutting multiple components, but lacks support for the hierarchical and external composition of aspects themselves and compromises important software engineering principles such as encapsulation, which is almost perfectly supported in black-box composition. The role and role model have been recognized to have many similarities with CBSD and AOP, but have significant differences with those composition techniques as well. Although each composition paradigm has its own advantages and disadvantages, there is no substantial support to realize the synergy of these composition paradigms: black-box composition, AOP, and the role model. In this paper, a new composition technique based on a representational abstraction of the relationship between component instances is introduced. The model supports the simple, elegant, and dynamic composition of components with its declarative form and provides the hooks through which an aspect can evolve and a parallel developed aspect can be merged at the instance level.
[ "relation model", "software composition", "black box composition", "aspect-oriented programming", "role", "component-based software development", "logic" ]
[ "P", "P", "P", "P", "P", "M", "U" ]
Zrj5taK
Generalization performance of magnitude-preserving semi-supervised ranking with graph-based regularization
Semi-supervised ranking is a relatively new and important learning problem inspired by many applications. We propose a novel graph-based regularized algorithm which learns the ranking function in the semi-supervised learning framework. It can exploit the geometry of the data while preserving the magnitude of the preferences. The least squares ranking loss is adopted, and the optimal solution of our model has an explicit form. We establish error analysis of our proposed algorithm and demonstrate the relationship between predictive performance and intrinsic properties of the graph. The experiments on three datasets for a recommendation task and two quantitative structure-activity relationship datasets show that our method is effective and comparable to some other state-of-the-art algorithms for ranking.
[ "generalization performance", "ranking", "semi-supervised learning", "graph laplacian", "reproducing kernel hilbert space" ]
[ "P", "P", "P", "M", "U" ]
2Jn4YKw
Accessible haptic user interface design approach for users with visual impairments
With the number of people with visual impairments (e.g., low vision and blind) continuing to increase, vision loss has become one of the most challenging disabilities. Today, haptic technology, using an alternative sense to vision, is deemed an important component for effectively accessing information systems. The most appropriately designed assistive technology is critical for those with visual impairments to adopt assistive technology and to access information, which will facilitate their tasks in personal and professional life. However, most of the existing design approaches are inapplicable and inappropriate to such design contexts as users with visual impairments interacting with non-graphical user interfaces (i.e., haptic technology). To resolve such design challenges, the present study modified a participatory design approach (i.e., PICTIVE, Plastic Interface for Collaborative Technology Initiatives Video Exploration) to be applicable to haptic technologies, by considering the brain plasticity theory. The sense of touch is integrated into the design activity of PICTIVE. Participants with visual impairments were able to effectively engage in designing non-visual interfaces (e.g., haptic interfaces) through non-visual communication methods (e.g., touch modality).
[ "accessibility", "visual impairments", "non-visual interfaces", "human factors", "design method", "usability" ]
[ "P", "P", "P", "U", "R", "U" ]
99HbiNj
effect of probabilistic task allocation based on statistical analysis of bid values
This paper presents the effect of adaptively introducing appropriate strategies into the award phase of the contract net protocol (CNP) in a massively multi-agent system (MMAS).
[ "task allocation", "contract net protocol", "massively multi-agent systems", "coordination" ]
[ "P", "P", "P", "U" ]
3ZJ3mMq
higher-order concurrent programs with finite communication topology (extended abstract)
Concurrent ML (CML) is an extension of the functional language Standard ML (SML) with primitives for the dynamic creation of processes and channels and for the communication of values over channels. Because of the powerful abstraction mechanisms, the communication topology of a given program may be very complex, and an efficient implementation may therefore be facilitated by knowledge of the topology. This paper presents an analysis for determining when a bounded number of processes and channels will be generated. The analysis proceeds in two stages. First, we extend a polymorphic type system for SML to deduce not only the type of CML programs but also their communication behaviour, expressed as terms in a new process algebra. Next, we develop an analysis that, given the communication behaviour, predicts the number of processes and channels required during the execution of the CML program. The correctness of the analysis is proved using a subject reduction property for the type system.
[ "concurrent program", "program", "communication", "topologies", "abstraction", "extensibility", "functional languages", "standardization", "dynamic", "process", "values", "complexity", "efficiency", "implementation", "knowledge", "paper", "analysis", "polymorphic", "type system", "process algebra", "correctness", "reduction", " ml ", "order" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "U" ]
56z3UC1
On the polyhedral structure of a multi-item production planning model with setup times
We present and study a mixed integer programming model that arises as a substructure in many industrial applications. This model generalizes a number of structured MIP models previously studied, and it provides a relaxation of various capacitated production planning problems and other fixed charge network flow problems. We analyze the polyhedral structure of the convex hull of this model, as well as of a strengthened LP relaxation. Among other results, we present valid inequalities that induce facets of the convex hull under certain conditions. We also discuss how to strengthen these inequalities by using known results for lifting valid inequalities for 0-1 continuous knapsack problems.
[ "production planning", "mixed integer programming", "fixed charge network flow", "polyhedral combinatorics", "capacitated lot-sizing" ]
[ "P", "P", "P", "M", "M" ]
1urPgQY
how appearance of robotic agents affects how people interpret the agents' attitudes
An experimental investigation of how the appearance of robotic agents affects the interpretations people make of the agents' attitudes is described. We conducted a psychological experiment in which participants were presented with artificial sounds that can make people estimate specific agents' primitive attitudes, expressed by three kinds of agents: a Mindstorms robot, an AIBO robot, and a normal laptop PC. They were also asked to select the correct attitudes based on the sounds expressed by these three agents. The results showed that the participants had higher interpretation rates when the PC presented the sounds, while they had lower rates when the Mindstorms and AIBO robots presented the sounds, even though the artificial sounds expressed by these agents were completely the same.
[ "agents' attitudes", "subtle expressions", "appearance of agents", "human-agent interaction" ]
[ "P", "M", "R", "U" ]
3L6veei
Coherence between one random and one periodic signal for measuring the strength of responses in the electro-encephalogram during sensory stimulation
Coherence between a pulse train representing periodic stimuli and the EEG has been used in the objective detection of steady-state evoked potentials. This work aimed to quantify the strength of the stimulus responses based on the statistics of coherence estimate between one random and one periodic signal focusing on the confidence limits and power of significance tests in detecting responses. To detect the responses in 95% of cases, a signal-to-noise ratio of about -7.9 dB was required when using 48 windows (M) in the coherence estimation. The ratio, however, increased to -1.2 dB when M was 12. The results were tested in Monte Carlo simulations and applied to EEGs obtained from 14 subjects during visual stimulation. The method showed differences in the strength of responses at the stimulus frequency and its harmonics, as well as variations between individuals and over cortical regions. In contrast to those from the parietal and temporal regions, results for the occipital region gave confidence limits (with M = 12) that were above zero for all subjects, indicating statistically significant responses. The proposed technique extends the usefulness of coherence as a measure of stimulus responses and allows statistical analysis that could also be applied usefully in a range of other biological signals.
[ "coherence", "eeg", "statistics", "rhythmic stimulation", "synchrony measure" ]
[ "P", "P", "P", "M", "M" ]
-FAEq&C
Exploring the dynamics of adaptation with evolutionary activity plots
Evolutionary activity statistics and their visualization are introduced, and their motivation is explained. Examples of their use are described, and their strengths and limitations are discussed. References to more extensive or general accounts of these techniques are provided.
[ "evolutionary activity", "visualization", "evolutionary adaptation" ]
[ "P", "P", "R" ]
4jLscRf
Repeated Exposure to the Abused Inhalant Toluene Alters Levels of Neurotransmitters and Generates Peroxynitrite in Nigrostriatal and Mesolimbic Nuclei in Rat
Toluene, a volatile hydrocarbon found in a variety of chemical compounds, is misused and abused by inhalation for its euphorigenic effects. Toluene's reinforcing properties may share a common characteristic with other drugs of abuse, namely, activation of the mesolimbic dopamine system. Prior studies in our laboratory found that acutely inhaled toluene activated midbrain dopamine neurons in the rat. Moreover, single systemic injections of toluene in rats produced a dose-dependent increase in locomotor activity which was blocked by depletion of nucleus accumbens dopamine or by pretreatment with a D2 dopamine receptor antagonist. Here we examined the effects of seven daily intraperitoneal injections of 600 mg/kg toluene on the content of serotonin and dopamine in the caudate nucleus (CN) and nucleus accumbens (NAC), substantia nigra, and ventral tegmental area at 2, 4, and 24 h after the last injection. Also, the roles of nitric oxide and peroxynitrite, and the production of 3-nitrosotyrosine (3-NT), in the CN and NAC were assessed at the same time points. Toluene treatments increased dopamine levels in the CN and NAC, and serotonin levels in the CN, NAC, and ventral tegmental area. Measurements of the dopamine metabolite dihydroxyphenylacetic acid (DOPAC) further suggested a change in transmitter utilization in the CN and NAC. Lastly, 3-NT levels also showed a differential change between the CN and NAC, but at different time points post-toluene injection. These results point out the complexity of action of toluene on neurotransmitter function following a course of chronic exposure. Changes in the production of 3-NT also suggest that toluene-induced neurotoxicity may be mediated via the generation of peroxynitrite.
[ "inhalant", "toluene", "neurotransmitter", "peroxynitrite", "nigrostriatal and mesolimbic nuclei", "dopamine", "serotonin", "3-nitrosotyrosine", "neurotoxicity", "oxidative stress" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
42bCE9-
supporting ad-hoc ranking aggregates
This paper presents a principled framework for efficient processing of ad-hoc top-k (ranking) aggregate queries, which provide the k groups with the highest aggregates as results. Essential support of such queries is lacking in current systems, which process the queries in a naive materialize-group-sort scheme that can be prohibitively inefficient. Our framework is based on three fundamental principles. The Upper-Bound Principle dictates the requirements of early pruning, and the Group-Ranking and Tuple-Ranking Principles dictate group-ordering and tuple-ordering requirements. They together guide the query processor toward a provably optimal tuple schedule for aggregate query processing. We propose a new execution framework to apply the principles and requirements. We address the challenges in realizing the framework and implementing new query operators, enabling efficient group-aware and rank-aware query plans. The experimental study validates our framework by demonstrating orders of magnitude performance improvement in the new query plans, compared with the traditional plans.
[ "ranking", "aggregate query", "top-k query processing", "decision support", "olap" ]
[ "P", "P", "R", "M", "U" ]
3Ls5z6f
Meaningful and meaningless solutions for cooperative n-person games
Game values often represent data that can be measured in more than one acceptable way (e.g., monetary amounts). We point out that in such a case a statement about cooperative n-person game models might be meaningless in the sense that its truth or falsity depends on the choice of an acceptable way to measure game values. In particular, we analyze statements about solution concepts such as the core, stable sets, the nucleolus, the Shapley value (and some of its generalizations).
[ "robustness and sensitivity analysis", "game theory" ]
[ "M", "M" ]
yPJSFT1
Numerical optimization algorithm for rotationally invariant multi-orbital slave-boson method
We develop a generalized numerical optimization algorithm for the rotationally invariant multi-orbital slave boson approach, which is applicable to arbitrary boundary constraints of high-dimensional objective functions by combining several classical optimization techniques. After constructing the calculation architecture of the rotationally invariant multi-orbital slave boson model, we apply this optimization algorithm to find the stable ground state and magnetic configuration of two-orbital Hubbard models. The numerical results are consistent with available solutions, confirming the correctness and accuracy of our present algorithm. Furthermore, we utilize it to explore the effects of the transverse Hund's coupling terms on the metal-insulator transition, the orbital selective Mott phase, and magnetism. These results show the quick convergence and robust stability of our algorithm in searching for the optimized solution of strongly correlated electron systems.
[ "numerical optimization algorithm", "slave boson", "hubbard model", "metalinsulator transition" ]
[ "P", "P", "P", "P" ]
2M:ytLv
molecular dynamics simulation of large-scale carbon nanotubes on a shared-memory architecture
Carbon nanotubes are expected to play a significant role in the design and manufacture of many future nano-mechanical and nano-electronic devices. It is important, therefore, that atomic-level elastomechanical response properties of both single and multiwall nanotubes be investigated in detail. Classical molecular dynamics simulations employing Brenner's reactive potential with long-range van der Waals interactions have been used in mechanistic response studies of carbon nanotubes under external strains. The studies of single and multiwalled carbon nanotubes under compressive strains show instabilities beyond the elastic response. Due to the inclusion of non-bonded long-range interactions, the simulations also show the redistribution of strain and strain energy from sideways buckling to the formation of highly localized strained kink sites. Bond rearrangements occur at the kink sites, leading to the formation of topological defects and preventing the tube from relaxing fully back to its original configuration. The elastomechanical response behavior of single and multiwall carbon nanotubes to externally applied compressive strains is simulated and studied in detail. We describe the results and discuss their implications for the stability of any molecular mechanical structure made of carbon nanotubes.
[ "molecular dynamics", "simulation", "large-scale", "carbon nanotubes", "architecture", "play", "role", "design", "manufacturability", "device", "future", "atom", "response", "interaction", "inclusion", "energy", "configurability", "behavior", "stability", "structure", "shared memory", "mechanical properties", "origin2000", "parallel" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "R", "U", "U" ]
4RDDQvY
Nonprimitive recursive complexity and undecidability for Petri net equivalences
The aim of this note is twofold. Firstly, it shows that the undecidability result for bisimilarity in [Theor. Comput. Sci. 148 (1995) 281-301] can be immediately extended to the whole range of equivalences (and preorders) on labelled Petri nets. Secondly, it shows that, restricting our attention to nets with a finite reachable space, the respective (decidable) problems are nonprimitive recursive; this approach also applies to Mayr and Meyer's result [J. ACM 28 (1981) 561-576] for reachability set equality, yielding a more direct proof. (C) 2001 Elsevier Science B.V. All rights reserved.
[ "complexity", "decidability", "petri-nets" ]
[ "P", "P", "U" ]
2so33d:
time-decaying aggregates in out-of-order streams
Processing large data streams is now a major topic in data management. The data involved can be truly massive, and the required analyses complex. In a stream of sequential events such as stock feeds, sensor readings, or IP traffic measurements, data tuples pertaining to recent events are typically more important than older ones. This can be formalized via time-decay functions, which assign weights to data based on the age of data. Decay functions such as sliding windows and exponential decay have been studied under the assumption of well-ordered arrivals, i.e., data arrives in non-decreasing order of time stamps. However, data quality issues are prevalent in massive streams (due to network asynchrony and delays etc.), and correct arrival order is not guaranteed. We focus on the computation of decayed aggregates such as range queries, quantiles, and heavy hitters on out-of-order streams, where elements do not necessarily arrive in increasing order of timestamps. Existing techniques such as Exponential Histograms and Waves are unable to handle out-of-order streams. We give the first deterministic algorithms for approximating these aggregates under popular decay functions such as sliding window and polynomial decay. We study the overhead of allowing out-of-order arrivals when compared to well-ordered arrivals, both analytically and experimentally. Our experiments confirm that these algorithms can be applied in practice, and compare the relative performance of different approaches for handling out-of-order arrivals.
[ "timing", "aggregate", "streams", "data streaming", "data", "data management", "complexity", "event", "sensor", "traffic measurement", "age", "sliding window", "order", "data quality", "network", "delay", "computation", "range queries", "timestamp", "histogram", "algorithm", "approximation", "polynomial", "out-of-order arrivals", "experience", "practical", "performance", "asynchronous data streams", "relation" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "U" ]
1JKGKkx
The Village Telco project: a reliable and practical wireless mesh telephony infrastructure
VoIP (Voice over IP) over mesh networks could be a potential solution to the high cost of making phone calls in most parts of Africa. The Village Telco (VT) is an easy to use and scalable VoIP over meshed WLAN (Wireless Local Area Network) telephone infrastructure. It uses a mesh network of mesh potatoes to form a peer-to-peer network to relay telephone calls without landlines or cell phone towers. This paper discusses the Village Telco infrastructure, how it addresses the numerous difficulties associated with wireless mesh networks, and its efficient deployment for VoIP services in some communities around the globe. The paper also presents the architecture and functions of a mesh potato and a novel combined analog telephone adapter (ATA) and WiFi access point that routes calls. Lastly, the paper presents the results of preliminary tests that have been conducted on a mesh potato. The preliminary results indicate very good performance and user acceptance of the mesh potatoes. The results proved that the infrastructure is deployable in severe and under-resourced environments as a means to make cheap phone calls and render Internet and IP-based services. As a result, the VT project contributes to bridging the digital divide in developing areas.
[ "village telco", "voip", "wlan", "mesh potato", "wireless mesh networks", "rural telephony" ]
[ "P", "P", "P", "P", "P", "M" ]
Lu3Kza4
General Subspace Learning With Corrupted Training Data Via Graph Embedding
We address the following subspace learning problem: supposing we are given a set of labeled, corrupted training data points, how to learn the underlying subspace, which contains three components: an intrinsic subspace that captures certain desired properties of a data set, a penalty subspace that fits the undesired properties of the data, and an error container that models the gross corruptions possibly existing in the data. Given a set of data points, these three components can be learned by solving a nuclear norm regularized optimization problem, which is convex and can be efficiently solved in polynomial time. Using the method as a tool, we propose a new discriminant analysis (i.e., supervised subspace learning) algorithm called Corruptions Tolerant Discriminant Analysis (CTDA), in which the intrinsic subspace is used to capture the features with high within-class similarity, the penalty subspace takes the role of modeling the undesired features with high between-class similarity, and the error container takes charge of fitting the possible corruptions in the data. We show that CTDA can well handle the gross corruptions possibly existing in the training data, whereas previous linear discriminant analysis algorithms arguably fail in such a setting. Extensive experiments conducted on two benchmark human face data sets and one object recognition data set show that CTDA outperforms the related algorithms.
[ "subspace learning", "corrupted training data", "graph embedding", "discriminant analysis" ]
[ "P", "P", "P", "P" ]
-uTnU8n
CLOSURE PROPERTIES OF HYPER-MINIMIZED AUTOMATA
Two deterministic finite automata are almost equivalent if they disagree in acceptance only for finitely many inputs. An automaton A is hyper-minimized if no automaton with fewer states is almost equivalent to A. A regular language L is canonical if the minimal automaton accepting L is hyper-minimized. The asymptotic state complexity s*(L) of a regular language L is the number of states of a hyper-minimized automaton for a language finitely different from L. In this paper we show that: (1) the class of canonical regular languages is not closed under intersection, union, concatenation, Kleene closure, difference, symmetric difference, reversal, homomorphism, or inverse homomorphism; (2) for any regular languages L(1) and L(2), the asymptotic state complexity of their union L(1) ∪ L(2), intersection L(1) ∩ L(2), difference L(1) - L(2), and symmetric difference L(1) ⊕ L(2) can be bounded by s*(L(1)) · s*(L(2)); this bound is tight in the binary case and in the unary case can be met in infinitely many cases; (3) for any regular language L, the asymptotic state complexity of its reversal L(R) can be bounded by 2^(s*(L)); this bound is tight in the binary case; (4) the asymptotic state complexity of Kleene closure and concatenation cannot be bounded. Namely, for every k >= 3, there exist languages K, L, and M such that s*(K) = s*(L) = s*(M) = 1 and s*(K*) = s*(L · M) = k. These are answers to open problems formulated by Badr et al. [RAIRO-Theor. Inf. Appl. 43 (2009) 69-94].
[ "hyper-minimized automata", "regular languages", "finite state automata" ]
[ "P", "P", "R" ]
2k7t51B
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider the feature space which limits the length of data, and derive a length necessary for parameter learning in hidden Markov models.
[ "feature selection", "sequential data model", "bayes learning", "algebraic geometry" ]
[ "P", "R", "M", "U" ]
-fjJLQK
A dual-scale lattice gas automata model for gas-solid two-phase flow in bubbling fluidized beds
Modelling the hydrodynamics of gas/solid flow is important for the design and scale-up of fluidized bed reactors. A novel gas/solid dual-scale model based on lattice gas cellular automata (LGCA) is proposed to describe the macroscopic behaviour through microscopic gas-solid interactions. Solid particles and gas pseudo-particles are aligned in lattices with different scales for solid and gas. In addition to basic LGCA rules, additional rules for collision and propagation are specifically designed for gas-solid systems. The solid's evolution is then driven by the temporal and spatial average momentum gained through solid-solid and gas-solid interactions. A statistical method, based on the similarity principle, is derived for the conversion between model parameters and hydrodynamic properties. Simulations of bubbles generated from a vertical jet in a bubbling fluidized bed based on this model agree well with experimental results, as well as with the results of two-fluid approaches and discrete particle simulations. (C) 2011 Elsevier Ltd. All rights reserved.
[ "model", "bubbling fluidized beds", "gas/solid flow", "lattice gas cellular automata" ]
[ "P", "P", "P", "P" ]
3wfM5iN
scalable algorithms for global snapshots in distributed systems
Existing algorithms for global snapshots in distributed systems are not scalable when the underlying topology is complete. In a network with N processors, these algorithms require O(N) space and O(N) messages per processor. As a result, these algorithms are not efficient in large systems when the logical topology of the communication layer, such as MPI, is complete. In this paper, we propose three algorithms for global snapshots: a grid-based, a tree-based and a centralized algorithm. The grid-based algorithm uses O(N) space but only O(√N) messages per processor. The tree-based algorithm requires only O(1) space and O(log N log w) messages per processor, where w is the average number of messages in transit per processor. The centralized algorithm requires only O(1) space and O(log w) messages per processor. We also have a matching lower bound for this problem. Our algorithms have applications in checkpointing, detecting stable predicates and implementing synchronizers. We have implemented our algorithms on top of the MPI library on the Blue Gene/L supercomputer. Our experiments confirm that the proposed algorithms significantly reduce the message and space complexity of a global snapshot.
[ "checkpointing", "stable predicates", "blue gene/l", "fault tolerance", "global snapshot algorithms" ]
[ "P", "P", "P", "U", "R" ]
2Txeraq
A smart TCP acknowledgment approach for multihop wireless networks
Reliable data transfer is one of the most difficult tasks to accomplish in multihop wireless networks. Traditional transport protocols like TCP face severe performance degradation over multihop networks, given the noisy nature of wireless media as well as the unstable connectivity conditions in place. The success of TCP in wired networks motivates its extension to wireless networks. A crucial challenge faced by TCP over these networks is how to operate smoothly with the 802.11 wireless MAC protocol, which also implements a retransmission mechanism at the link level in addition to short RTS/CTS control frames for avoiding collisions. These features render the transmission of TCP acknowledgments (ACKs) quite costly. Data and ACK packets cause similar medium access overheads despite the much smaller size of the ACKs. In this paper, we further evaluate our dynamic adaptive strategy for reducing ACK-induced overhead and the consequent collisions. Our approach resembles the sender side's congestion control. The receiver is self-adaptive, delaying more ACKs under non-constrained channels and fewer otherwise. This improves not only throughput but also power consumption. Simulation evaluations exhibit significant improvement in several scenarios.
[ "wireless multihop networks", "transport control protocol", "delayed acknowledgments" ]
[ "R", "R", "R" ]
-ie-LcM
Vasopressin and social odor processing in the olfactory bulb and anterior olfactory nucleus
Central vasopressin facilitates social recognition and modulates numerous complex social behaviors in mammals, including parental behavior, aggression, affiliation, and pair-bonding. In rodents, social interactions are primarily mediated by the exchange of olfactory information, and there is evidence that vasopressin signaling is important in brain areas where olfactory information is processed. We recently discovered populations of vasopressin neurons in the main and accessory olfactory bulbs and anterior olfactory nucleus that are involved in the processing of social odor cues. In this review, we propose a model of how vasopressin release in these regions, potentially from the dendrites, may act to filter social odor information to facilitate odor-based social recognition. Finally, we discuss recent human research linked to vasopressin signaling and suggest that our model of priming-facilitated vasopressin signaling would be a rewarding target for further studies, as a failure of priming may underlie pathological changes in complex behaviors.
[ "social recognition", "olfaction", "social memory" ]
[ "P", "U", "M" ]
-Mts&-t
Ticks, Tick-Borne Rickettsiae, and Coxiella burnetii in the Greek Island of Cephalonia
Domestic animals are the hosts of several tick species and the reservoirs of some tick-borne pathogens; hence, they play an important role in the circulation of these arthropods and their pathogens in nature. They may act as vectors, but also as reservoirs of spotted fever group (SFG) rickettsiae, which are the causative agents of SFG rickettsioses. Q fever is a worldwide zoonosis caused by Coxiella burnetii (C. burnetii), which can be isolated from ticks. A total of 1,848 ticks (954 females, 853 males, and 41 nymphs) were collected from dogs, goats, sheep, cattle, and horses in 32 different localities of the Greek island of Cephalonia. Rhipicephalus (Rh.) bursa, Rh. turanicus, Rh. sanguineus, Dermacentor marginatus (D. marginatus), Ixodes gibbosus (I. gibbosus), Haemaphysalis (Ha.) punctata, Ha. sulcata, Hyalomma (Hy.) anatolicum excavatum and Hy. marginatum marginatum were the species identified. C. burnetii and four different SFG rickettsiae, including Rickettsia (R.) conorii, R. massiliae, R. rhipicephali, and R. aeschlimannii, were detected using molecular methods. Double infection with R. massiliae and C. burnetii was found in one of the positive ticks.
[ "ticks", "coxiella burnetii", "rickettsia conorii", "rickettsia massiliae", "rickettsia rhipicephali", "rickettsia aeschlimannii", "greece" ]
[ "P", "P", "R", "R", "R", "R", "U" ]
4QNBvUf
Quiver polynomials in iterated residue form
Degeneracy loci polynomials for quiver representations generalize several important polynomials in algebraic combinatorics. In this paper we give a nonconventional generating sequence description of these polynomials when the quiver is of Dynkin type.
[ "quiver", "iterated residues", "degeneracy loci", "equivariant cohomology" ]
[ "P", "P", "P", "U" ]
-9nSjpT
The relationship among soft sets, soft rough sets and topologies
Molodtsov's soft set theory is a newly emerging tool to deal with uncertain problems. Based on the novel granulation structures called soft approximation spaces, Feng et al. initiated soft rough approximations and soft rough sets. Feng's soft rough sets can be seen as a generalized rough set model based on soft sets, which could provide better approximations than Pawlak's rough sets in some cases. This paper is devoted to establishing the relationship among soft sets, soft rough sets and topologies. We introduce the concept of topological soft sets by combining soft sets with topologies and give their properties. New types of soft sets, such as keeping intersection soft sets and keeping union soft sets, are defined and supported by some illustrative examples. We describe the relationship between rough sets and soft rough sets. We obtain the structure of soft rough sets and the topological structure of soft sets, and reveal that every topological space on the initial universe is a soft approximating space.
[ "soft sets", "soft rough sets", "rough sets", "topologies", "soft rough approximations", "topological soft sets" ]
[ "P", "P", "P", "P", "P", "P" ]
2sX&8Fh
A highly efficient VLSI architecture for H.264/AVC CAVLC decoder
In this paper, an efficient algorithm is proposed to improve the decoding efficiency of the context-based adaptive variable length coding (CAVLC) procedure. Due to the data dependency among symbols in the decoding flow, the CAVLC decoder requires a large computation time, which dominates the overall decoder system performance. To expedite its decoding speed, the critical path in the CAVLC decoder is first analyzed and then reduced by forwarding the adaptive detection of succeeding symbols. With a shortened critical path, the CAVLC architecture is further divided into two segments, which can be easily implemented by a pipeline structure. Consequently, the overall performance is effectively improved. In the hardware implementation, a low-power combined LUT and a single output buffer have been adopted to reduce the area as well as the power consumption without affecting the decoding performance. Experimental results show that the proposed architecture, surpassing other recent designs, can reduce power consumption by approximately 40% and achieve three times the decoding speed of the original decoding procedure suggested in the H.264 standard. The maximum frequency can be higher than 210 MHz, which easily supports the real-time requirement for resolutions higher than the HD1080 format.
[ "h.264/avc", "context-based adaptive variable length coding (cavlc)", "variable length coding" ]
[ "P", "P", "P" ]
-b8TNQh
building database applications of virtual reality with x-vrml
A new method of building active database-driven virtual reality applications is presented. The term "active" is used to describe applications that allow server-side user interaction, dynamic composition of virtual scenes, access to on-line data, continuous visualization, and implementation of persistency. The use of the X-VRML language for building active applications of virtual reality is proposed. X-VRML is a high-level XML-based language that overcomes the main limitations of current virtual reality systems by providing convenient access to databases, object-orientation, parameterization, and imperative programming techniques. Applications of X-VRML include on-line data visualization, geographical information systems, scientific visualization, virtual games, and e-commerce applications such as virtual shops. In this paper, methods of accessing databases from X-VRML are described, architectures of X-VRML systems for different application domains are discussed, and examples of database applications of virtual reality implemented in X-VRML are presented.
[ "databases", "data", "applications", "virtualization", "method", "activation", "use", "user interaction", "dynamic", "compositing", "access", "continuation", "visualization", "implementation", "language", "systems", "parameterization", "program", "data visualization", "information system", "games", "paper", "architecture", "domain", "examples", "java", " virtual reality ", "web3d", "object-oriented", "mpeg-4", "multimedia", "xml", "vrml", "scientific visualiztion", "server" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M", "U", "U", "U", "U", "U", "U", "M", "U" ]
488ZHxs
Distributed H infinity filtering for sensor networks with switching topology
In this article, the distributed H∞ filtering problem is investigated for a class of sensor networks under topology switching. The main purpose is to design a distributed H∞ filter that allows one to regulate the sensors' working modes. Firstly, a switched system model is proposed to reflect the working mode changes of the sensors. Then, a stochastic sequence is adopted to model the packet dropout phenomenon occurring in the channels from the plant to the networked sensors. By utilising the Lyapunov functional method and stochastic analysis, some sufficient conditions are established to ensure that the filtering error system is mean-square exponentially stable with a prescribed H∞ performance level. Furthermore, the filter parameters are determined by solving a set of linear matrix inequalities (LMIs). Our results relate the decay rate of the filtering error system directly to the switching frequency of the topology and show the existence of such a distributed filter when the topology is not varying very frequently, which is helpful for sensor state regulation. Finally, the effectiveness of the proposed design method is demonstrated by two numerical examples.
[ "sensor networks", "switching topology", "exponentially stable", "lmis", "distributed filtering", "energy efficient" ]
[ "P", "P", "P", "P", "P", "U" ]
4Kd7bPb
The evolution of goal-based information modelling: literature review
Purpose - The first in a series on goal-based information modelling, this paper presents a literature review of two goal-based measurement methods. The second article in the series will build on this background to present an overview of some recent case-based research that shows the applicability of the goal-based methods for information modelling (as opposed to measurement). The third and concluding article in the series will present a new goal-based information model - the goal-based information framework (GbIF) - that is well suited to the task of documenting and evaluating organisational information flow. Design/methodology/approach - Following a literature review of the goal-question-metric (GQM) and goal-question-indicator-measure (GQIM) methods, the paper presents the strengths and weaknesses of goal-based approaches. Findings - The literature indicates that the goal-based methods are both rigorous and adaptable. With over 20 years of use, goal-based methods have achieved demonstrable and quantifiable results in both practitioner and academic studies. The downside of the methods is the potential expense and the "expansiveness" of goal-based models. The overheads of managing the goal-based process, from early negotiations on objectives and goals to maintaining the model (adding new goals, questions and indicators), could make the method unwieldy and expensive for organisations with limited resources. An additional challenge identified in the literature is the narrow focus of "top-down" (i.e. goal-based) methods. Since the methods limit the focus to a pre-defined set of goals and questions, the opportunity for discovery of new information is limited. Research limitations/implications - Much of the previous work on goal-based methodologies has been confined to software measurement contexts in larger organisations with well-established information gathering processes. Although the next part of the series presents goal-based methods outside of this native context, and within low-maturity organisations, further work needs to be done to understand the applicability of these methods in the information science discipline. Originality/value - This paper presents an overview of goal-based methods. The next article in the series will present the method outside the native context of software measurement. With the universality of the method established, information scientists will have a new tool to evaluate and document organisational information flow.
[ "information", "modelling" ]
[ "P", "P" ]
-EY6Zdw
A communication reduction approach to iteratively solve large sparse linear systems on a GPGPU cluster
Finite Element Methods (FEM) are widely used in academia and industry, especially in the fields of mechanical engineering, civil engineering, aerospace, and electrical engineering. These methods usually convert partial differential equations into large sparse linear systems. For complex problems, solving these large sparse linear systems is a time-consuming process. This paper presents a parallelized iterative solver for large sparse linear systems implemented on a GPGPU cluster. Traditionally, these problems do not scale well on GPGPU clusters. This paper presents an approach to reduce the communications between cluster compute nodes for these solvers. Additionally, computation and communication are overlapped to reduce the impact of data exchange. The parallelized system achieved a speedup of up to 15.3 times on 16 NVIDIA Tesla GPUs, compared to a single GPU. An analytical evaluation of the algorithm is conducted in this paper, and the analytical equations for predicting the performance are presented and validated.
[ "communication reduction", "sparse linear systems", "gpgpu cluster", "iterative solver" ]
[ "P", "P", "P", "P" ]
-4:wVZ7
transductive inference using multiple experts for brushwork annotation in paintings domain
Many recent studies perform annotation of paintings based on brushwork. In these studies the brushwork is modeled indirectly, as part of the annotation of high-level artistic concepts such as the artist name, using low-level texture. In this paper, we develop a serial multi-expert framework for explicit annotation of paintings with brushwork classes. In the proposed framework, each individual expert implements transductive inference by exploiting both labeled and unlabelled data. To minimize the problem of noise in the feature space, the experts select appropriate features based on their relevance to the brushwork classes. The selected features are utilized to generate several models to annotate the unlabelled patterns. The experts select the best-performing model based on the Vapnik combined bound. The transductive annotation using multiple experts outperforms the conventional baseline method in annotating patterns with brushwork classes.
[ "transductive inference", "inference", "brushwork", "annotation", "painting", "model", "concept", "paper", "class", "data", "noise", "feature", "space", "select", "relevance", "pattern", "method", " framework ", "feature selection" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "R" ]
1BBeRUi
On the integration of equations of motion for particle-in-cell codes
An area-preserving implementation of the 2nd order Runge-Kutta integration method for equations of motion is presented. For forces independent of velocity the scheme possesses the same numerical simplicity and stability as the leapfrog method, and it is not implicit for forces which do depend on velocity. It can therefore be easily applied where the leapfrog method in general cannot. We discuss the stability of the new scheme and test its performance in calculations of particle motion in three cases of interest. First, in the ubiquitous and numerically demanding example of nonlinear interaction of particles with a propagating plane wave; second, in the case of particle motion in a static magnetic field; and third, in a nonlinear dissipative case leading to a limit cycle. We compare computed orbits with exact orbits and with results from the leapfrog and other low-order integration schemes. Of special interest is the role of intrinsic stochasticity introduced by time differencing, which can destroy orbits of an otherwise exactly integrable system and therefore constitutes a restriction on the applicability of an integration scheme in such a context [A. Friedman, S.P. Auerbach, J. Comput. Phys. 93 (1991) 171]. In particular, we show that for a plane wave the new scheme proposed herein can be reduced to a symmetric standard map. This leads to the nonlinear stability condition Δt·ωB ≤ 1, where Δt is the time step and ωB the particle bounce frequency. (c) 2005 Elsevier Inc. All rights reserved.
[ "equations of motion", "2nd order integration methods", "nonlinear oscillations" ]
[ "P", "R", "M" ]
-jmAkLJ
system support for mobile augmented reality services
Developing and deploying augmented reality (AR) services in pervasive computing environments is quite difficult because almost all current systems require heavy and bulky head-mounted displays (HMDs) and are based on inflexible centralized architectures for detecting service locations and superimposing AR images. We propose a light-weight mobile AR service framework that combines the personal mobile devices most people own nowadays, visual tags as an inexpensive AR technique, and mobile code that enables easy-to-deploy environments. Our framework enables developers to easily deploy mobile AR services in pervasive computing environments and users to interact with them in a way that is both practical and intuitive.
[ "mobile augmented reality", "vidgets framework" ]
[ "P", "M" ]
kK2hEMa
Fabrication of the wireless systems for controlling movements of the electrical stimulus capsule in the small intestines
Diseases of the gastro-intestinal tract are becoming more prevalent. New techniques and devices, such as the wireless capsule endoscope and the telemetry capsule, that are able to measure the various signals of the digestive organs (temperature, pH, and pressure), have been developed for the observation of the digestive organs. These capsule devices, however, provide no means of moving or holding them in place. In order to make a swift diagnosis and to give proper medication, it is necessary to control the moving speed of the capsule. This paper presents a wireless system for controlling the movements of an electrical stimulus capsule. It comprises an electrical stimulus capsule which can be swallowed and an external transmitting control system. A receiver, a receiving antenna (small multi-loop), a transmitter, and a transmitting antenna (monopole) were designed and fabricated taking into consideration the MPE, power consumption, system size, signal-to-noise ratio and the modulation method. The wireless system, designed and implemented for the control of movements of the electrical stimulus capsule, was verified by in-vitro experiments performed on the small intestines of a pig. As a result, we found that when the small intestines are contracted by electrical stimuli, the capsule can move in the opposite direction, which means that the capsule can go up or down in the small intestines.
[ "wireless system", "electrical stimulus capsule", "wireless capsule endoscope", "moving speed", "receiver", "transmitter", "in-vitro experiments", "small multi-loop" ]
[ "P", "P", "P", "P", "P", "P", "P", "M" ]
3A9J5AN
A CONTINUOUS WAVELET-BASED APPROACH TO DETECT ANISOTROPIC PROPERTIES IN SPATIAL POINT PROCESSES
A two-dimensional stochastic point process can be regarded as a random measure and thus represented as a (countable) sum of Dirac delta measures concentrated at some points. Integration with respect to the point process itself leads to the concept of the continuous wavelet transform of a point process. Then, applying suitable translation, rotation and dilation operations through a non-unitary operator, we obtain a transformed point process which highlights the main properties of the original point process. The choice of the mother wavelet is relevant, and we thus conduct a detailed analysis, proposing three two-dimensional mother wavelets. We use this approach to detect the main directions present in the point process, and to test for anisotropy.
[ "random measure", "continuous wavelet transform", "transformed point processes", "anisotropic point processes", "curvature", "end-stopped mother wavelet", "mexican hat mother wavelet", "morlet mother wavelet", "energy density position representation" ]
[ "P", "P", "P", "R", "U", "M", "M", "M", "U" ]
3f8GtHk
ROBUST OBJECT TRACKING USING JOINT COLOR-TEXTURE HISTOGRAM
A novel object tracking algorithm is presented in this paper, which uses the joint color-texture histogram to represent a target and then applies it within the mean shift framework. Apart from the conventional color histogram features, the texture features of the object are also extracted by using the local binary pattern (LBP) technique to represent the object. The major uniform LBP patterns are exploited to form a mask for joint color-texture feature selection. Compared with the traditional color histogram based algorithms that use the whole target region for tracking, the proposed algorithm effectively extracts the edge and corner features in the target region, which characterize the target better and represent it more robustly. The experimental results validate that the proposed method greatly improves the tracking accuracy and efficiency with fewer mean shift iterations than standard mean shift tracking. It can robustly track the target under complex scenes, such as similar target and background appearance, in which the traditional color-based schemes may fail to track.
[ "object tracking", "mean shift", "color histogram", "local binary pattern" ]
[ "P", "P", "P", "P" ]
2fmuMdM
Quasi-Resonant Interconnects: A Low Power, Low Latency Design Methodology
Design and analysis guidelines for quasi-resonant interconnect networks (QRN) are presented in this paper. The methodology focuses on developing an accurate analytic distributed model of the on-chip interconnect and inductor to obtain both low power and low latency. Excellent agreement is shown between the proposed model and SpectraS simulations. The analysis and design of the inductor, insertion point, and driver resistance for minimum power-delay product is described. A case study demonstrates the design of a quasi-resonant interconnect, transmitting a 5 Gb/s data signal along a 5 mm line in a TSMC 0.18 μm CMOS technology. As compared to classical repeater insertion, an average reduction of 91.1% and 37.8% is obtained in power consumption and delay, respectively. As compared to optical links, a reduction of 97.1% and 35.6% is observed in power consumption and delay, respectively.
[ "latency", "on-chip interconnects", "on-chip inductors", "power dissipation", "resonance" ]
[ "P", "P", "R", "M", "U" ]
4831Ro4
Combining Hashing and Enciphering Algorithms for Epidemiological Analysis of Gathered Data
Objectives: Compiling individual records coming from different sources is necessary for multi-center studies. Legal aspects can be satisfied by implementing anonymization procedures. When using these procedures with a different key for each study, it becomes almost impossible to link records from separate data collections. Methods: The originality of the method relies on the way the combination of hashing and enciphering techniques is performed: as in asymmetric encryption, two keys are used, but the private key depends on the patient's identity. Results: The combination of hashing and enciphering techniques provides a great improvement in the overall security of the proposed scheme. Conclusion: This methodology makes stored data available for use in the field of public health, while respecting legal security requirements.
[ "hashing", "encryption", "security", "patient identification" ]
[ "P", "P", "P", "M" ]
2vWSaAc
A personalized English learning recommender system for ESL students
This paper presents an online personalized English learning recommender system capable of providing ESL students with reading lessons that suit their different interests and therefore increase their motivation to learn. The system, using content-based analysis, collaborative filtering, and data mining techniques, analyzes real students' reading data and generates recommender scores, based on which appropriate lessons are selected for the respective students. Its performance having been tracked over a period of one year, this recommender system has proved to be very useful in heightening ESL learners' motivation and interest in reading.
[ "recommender system", "esl", "data mining", "online learning", "learning system", "association rules", "clustering" ]
[ "P", "P", "P", "R", "R", "U", "U" ]
xFi8L5Y
Graph-based hierarchical conceptual clustering
Hierarchical conceptual clustering has proven to be a useful, although under-explored, data mining technique. A graph-based representation of structural information combined with a substructure discovery technique has been shown to be successful in knowledge discovery. The SUBDUE substructure discovery system provides one such combination of approaches. This work presents SUBDUE and the development of its clustering functionalities. Several examples are used to illustrate the validity of the approach both in structured and unstructured domains, as well as to compare SUBDUE to the Cobweb clustering algorithm. We also develop a new metric for comparing structurally-defined clusterings. Results show that SUBDUE successfully discovers hierarchical clusterings in both structured and unstructured data.
[ "clustering", "cluster analysis", "concept formation", "structural data", "graph match" ]
[ "P", "M", "U", "R", "U" ]
2PPwT7k
An intelligent system employing an enhanced fuzzy c-means clustering model: Application in the case of forest fires
Fuzzy c-means is a well-established clustering algorithm. According to this approach, instead of having each data point Dpi=(X,Y) belong only to a specific cluster in a crisp manner, each Dpi belongs to all of the determined clusters with a different degree of membership. In this way cluster overlapping is allowed. This research effort enhances the fuzzy c-means model in an intelligent manner, employing a flexible fuzzy termination criterion. The enhanced fuzzy c-means clustering algorithm performs several iterations before the proper centers of the clusters more or less stabilize, which means that their coordinates remain almost equal to the previous ones. In this way the algorithm is extended to perform in a more flexible and human-like intelligent way, avoiding infinite loops and unnecessary iterations. A corresponding software system applying the extended model has been developed in the C++ programming language. The system has been applied to the clustering of the Greek forest departments according to their forest fire risk. Two risk factors were taken into consideration, namely the number of forest fires and the annual burned forested areas. The design and the development of the innovative model-system and the results of its application are presented and discussed in this research paper.
[ "forest fires", "extended fuzzy c-means clustering", "innovative fuzzy termination criterion", "forest fire risk clustering" ]
[ "P", "R", "R", "R" ]
4bayWMP
Miniaturization of UWB Antennas and its Influence on Antenna-Transceiver Performance in Impulse-UWB Communication
In this paper, a co-design methodology and the effect of antenna miniaturization in an impulse UWB system/transceiver are presented. Modified small-size printed tapered monopole antennas (PTMA) are designed in different scaling sizes. In order to evaluate the performance and functionality of these antennas, the effect of each antenna is studied in a given impulse UWB system. The UWB system includes an impulse UWB transmitter, and two kinds of UWB receivers are considered: one based on a correlation detection scheme and one on an energy detection scheme. A tunable low-power impulse UWB transmitter is designed, and the benefit of co-designing it with the PTMA antenna is investigated for the 3.1-10.6 GHz band. A comparison is given between a 50 Ω design and a co-designed version. Our antenna/transceiver co-design methodology shows improvement in both transmitter efficiency and whole-system performance. The simulation results show that the PTMA antenna and its miniaturized geometries are suitable for UWB applications.
[ "uwb antennas", "transceiver", "design methodology", "impulse radio", "ultra-wideband" ]
[ "P", "P", "R", "M", "U" ]
-eDwsKq
INDUCED QUASI-ARITHMETIC UNCERTAIN LINGUISTIC AGGREGATION OPERATOR
Induced quasi-arithmetic aggregation operators are considered to aggregate uncertain linguistic information by using order inducing variables. We introduce the induced correlative uncertain linguistic aggregation operator with Choquet integral and we also present the induced uncertain linguistic aggregation operator by using the Dempster-Shafer theory of evidence. The special cases of the new proposed operators are investigated. Many existing linguistic aggregation operators are special cases of our new operators and more new uncertain linguistic aggregation operators can be derived from them. Decision making methods based on the new aggregation operators are proposed and architecture material supplier selection problems are presented to illustrate the feasibility and efficiency of the new methods.
[ "aggregation operator", "choquet integral", "dempster-shafer theory", "decision making", "uncertain linguistic variable" ]
[ "P", "P", "P", "P", "R" ]
1&8WzqS
On fuzzy congruence of a near-ring module
The aim of this paper is to introduce fuzzy submodule and fuzzy congruence of an R-module (Near-ring module), to obtain the correspondence between fuzzy congruences and fuzzy submodules of an R-module, to define quotient R-module of an R-module over a fuzzy submodule and to obtain correspondence between fuzzy congruences of an R-module and fuzzy congruences of quotient R-module over a fuzzy submodule of an R-module. (C) 2000 Elsevier Science B.V. All rights reserved.
[ "fuzzy congruence", "fuzzy submodule", "r-module", "algebra", "quotient module" ]
[ "P", "P", "P", "U", "R" ]
2UXWQX-
Self-bounded controlled invariant subspaces in measurable signal decoupling with stability: Minimal-order feedforward solution
The structural properties of self-bounded controlled invariant subspaces are fundamental to the synthesis of a dynamic feedforward compensator achieving insensitivity of the controlled output to a disturbance input accessible for measurement, on the assumption that the system is stable or pre-stabilized by an inner feedback. The control system herein devised has several important features: i) minimum order of the feedforward compensator; ii) minimum number of unassignable dynamics internal to the feedforward compensator; iii) maximum number of dynamics, external to the feedforward compensator, arbitrarily assignable by a possible inner feedback. From the numerical point of view, the design method herein detailed does not involve any computation of eigenspaces, which may be critical for systems of high order. The procedure is first presented for left-invertible systems. Then, it is extended to non-left-invertible systems by means of a simple, original, squaring-down technique.
[ "self-bounded controlled invariant subspaces", "measurable signal decoupling", "non-left-invertible systems", "geometric approach", "linear systems" ]
[ "P", "P", "P", "U", "M" ]
PyLn9S6
hypergraph-based multilevel matrix approximation for text information retrieval
In Latent Semantic Indexing (LSI), a collection of documents is often pre-processed to form a sparse term-document matrix, followed by the computation of a low-rank approximation to the data matrix. A multilevel framework based on hypergraph coarsening is presented which exploits the hypergraph that is canonically associated with the sparse term-document matrix representing the data. The main goal is to reduce the cost of the matrix approximation without sacrificing accuracy. Because coarsening by multilevel hypergraph techniques is a form of clustering, the proposed approach can be regarded as a hybrid of factorization-based LSI and clustering-based LSI. Experimental results indicate that our method achieves a good improvement in retrieval performance at a reduced cost.
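As a point of reference, the baseline computation that the coarsening accelerates is the standard truncated-SVD step of LSI. A minimal sketch follows, assuming a toy random term-document matrix; the multilevel coarsening itself is not shown, and all sizes and the rank k are hypothetical:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Toy sparse term-document matrix A (terms x documents); placeholder data.
A = sparse_random(5000, 1200, density=0.01, format="csr", random_state=0)
k = 50                         # target rank (illustrative choice)
U, s, Vt = svds(A, k=k)        # leading k singular triplets of A

# Standard LSI query folding: project a term-space query into latent space.
q = np.zeros(5000)
q[[3, 17, 42]] = 1.0           # toy query containing three terms
q_latent = (q @ U) / s
print(q_latent.shape)          # (50,)
```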
[ "text information retrieval", "latent semantic indexing", "multilevel hypergraph partitioning", "low-rank matrix approximation" ]
[ "P", "P", "M", "R" ]
-&STTsJ
Balanced paths in acyclic networks: Tractable cases and related approaches
Given a weighted acyclic network G and two nodes s and t in G, we consider the problem of computing k balanced paths from s to t, that is, k paths such that the difference in cost between the longest and the shortest path is minimized. The problem has several variants. We show that, whereas the general problem is solvable in pseudopolynomial time, both the arc-disjoint and the node-disjoint variants (i.e., the variants where the k paths are required to be arc-disjoint and node-disjoint, respectively) are strongly NP-hard. We then address some significant special cases of these variants, and propose exact as well as approximate algorithms for their solution. The proposed approaches are also able to solve versions of the problem in which k origin-destination pairs are provided, and a set of k paths linking the origin-destination pairs has to be computed in such a way as to minimize the difference in cost between the longest and the shortest path in the set. (C) 2005 Wiley Periodicals, Inc. NETWORKS, Vol. 45(2), 104-111 2005
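The pseudopolynomial-time claim for the general (non-disjoint) variant can be illustrated with a small dynamic program over a topologically ordered DAG with integer arc costs: collect the set of achievable s-t path costs, then slide a window of k values over the sorted costs. This is a sketch under those assumptions, not the paper's algorithm; it also ignores the case where several distinct paths share one cost:

```python
def balanced_path_costs(n, arcs, s, t, k):
    # arcs: list of (u, v, cost); vertices assumed topologically numbered 0..n-1
    reach = [set() for _ in range(n)]
    reach[s] = {0}
    for u in range(n):                     # DP in topological order
        for (a, b, c) in arcs:
            if a == u:
                reach[b] |= {x + c for x in reach[u]}
    costs = sorted(reach[t])
    # best window of k achievable cost values minimizing max - min
    return min((costs[i + k - 1] - costs[i], costs[i:i + k])
               for i in range(len(costs) - k + 1))

# toy DAG with two s-t paths of costs 7 and 4; k = 2 gives spread 3
print(balanced_path_costs(4, [(0, 1, 2), (1, 3, 5), (0, 2, 3), (2, 3, 1)], 0, 3, 2))
```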
[ "balanced paths", "layered networks", "cost difference", "pseudopolynomial approaches" ]
[ "P", "M", "R", "R" ]
4ypgRPa
The ?-connected assignment problem
Given a graph and the costs of assigning to each vertex one of k different colors, we want to find a minimum-cost assignment such that no color q induces a subgraph with more than a given number (?q) of connected components. This problem arose in the context of contiguity-constrained clustering, but it also has a number of other possible applications. We show the problem to be NP-hard. Nevertheless, we derive a dynamic programming algorithm which shows that the case where the underlying graph is a tree is solvable in polynomial time. Next, we propose mixed-integer programming formulations for this problem that lead to branch-and-cut and branch-and-price algorithms. Finally, we introduce a new class of valid inequalities to obtain an enhanced branch-and-cut. Extensive computational experiments are reported.
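The feasibility side of the constraint is easy to state in code: under a given assignment, every color class must induce at most the allowed number of connected components. The sketch below checks that condition with a per-color flood fill; the function name and dict-based interface are hypothetical, and the paper's algorithms optimize cost subject to this check rather than merely verifying it:

```python
from collections import defaultdict

def respects_component_bounds(n, edges, color, bound):
    # Hypothetical helper: for each color q, count the connected components
    # of the subgraph induced by vertices assigned q; all must be <= bound[q].
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    components = defaultdict(int)
    seen = [False] * n
    for start in range(n):
        if seen[start]:
            continue
        seen[start] = True
        components[color[start]] += 1      # new monochromatic component
        stack = [start]
        while stack:                       # flood fill within one color class
            u = stack.pop()
            for w in adj[u]:
                if not seen[w] and color[w] == color[u]:
                    seen[w] = True
                    stack.append(w)
    return all(components[q] <= bound[q] for q in components)

# toy: path 0-1-2-3 colored (r, b, r, r); color r induces 2 components
print(respects_component_bounds(4, [(0, 1), (1, 2), (2, 3)],
                                ["r", "b", "r", "r"], {"r": 2, "b": 1}))
```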
[ "assignment", "clustering", "cutting", "pricing", "integer programming" ]
[ "P", "P", "U", "U", "M" ]
h6mB5Uw
Stable Spaces for Real-time Clothing
We present a technique for learning clothing models that enables the simultaneous animation of thousands of detailed garments in real-time. This surprisingly simple conditional model learns and preserves the key dynamic properties of a cloth motion along with folding details. Our approach requires no a priori physical model, but rather treats training data as a "black box." We show that the models learned with our method are stable over large time-steps and can approximately resolve cloth-body collisions. We also show that within a class of methods, no simpler model covers the full range of cloth dynamics captured by ours. Our method bridges the current gap between skinning and physical simulation, combining benefits of speed from the former with dynamic effects from the latter. We demonstrate our approach on a variety of apparel worn by male and female human characters performing a varied set of motions typically used in video games (e.g., walking, running, jumping, etc.).
[ "video games", "cloth animation", "character animation", "virtual reality", "cloth simulation" ]
[ "P", "R", "R", "U", "R" ]
1h&74Q7
using topes to validate and reformat data in end-user programming tools
End-user programming tools offer no data types except "string" for many categories of data, such as person names and street addresses. Consequently, these tools cannot automatically validate or reformat these data. To address this problem, we have developed a user-extensible model for string-like data. Each "tope" in this model is a user-defined abstraction that guides the interpretation of strings as a particular kind of data. Specifically, each tope implementation contains software functions for recognizing and reformatting instances of that tope's kind of data. This makes it possible at runtime to distinguish between invalid data, valid data, and questionable data that could be valid or invalid. Once identified, questionable and/or invalid data can be double-checked and possibly corrected, thereby increasing the overall reliability of the data. Valid data can be automatically reformatted to any of the formats appropriate for that kind of data. To show the general applicability of topes, we describe new features that topes have enabled us to provide in four tools.
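A minimal sketch of the runtime behavior a tope enables, grading strings as valid, questionable, or invalid and reformatting the acceptable ones, is given below for a made-up kind of data (US-style phone numbers). The formats, thresholds, and function names are illustrative assumptions, not the paper's implementation:

```python
import re

# Hypothetical tope for US-style phone numbers: several recognized formats
# plus functions to score a string and rewrite it into a canonical format.
FORMATS = {
    "dashed": re.compile(r"^\d{3}-\d{3}-\d{4}$"),
    "dotted": re.compile(r"^\d{3}\.\d{3}\.\d{4}$"),
    "bare":   re.compile(r"^\d{10}$"),
}

def recognize(s):
    # Confidence in [0, 1]: 1.0 = valid, 0.5 = questionable, 0.0 = invalid.
    if any(pat.match(s) for pat in FORMATS.values()):
        return 1.0
    # right number of digits but unexpected punctuation: double-check it
    return 0.5 if len(re.sub(r"\D", "", s)) == 10 else 0.0

def reformat(s, target="dashed"):
    # Rewrite any recognized instance into the dashed format.
    assert recognize(s) > 0.0 and target == "dashed"
    d = re.sub(r"\D", "", s)
    return f"{d[0:3]}-{d[3:6]}-{d[6:10]}"

print(recognize("412-268-3000"), reformat("412.268.3000"))
```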
[ "validation", "data", "end-user programming", "abstraction", "web macros", "web applications", "spreadsheets", "end-user software engineering" ]
[ "P", "P", "P", "P", "U", "M", "U", "M" ]
1gFmSD-
Rough Sets and the role of the monetary policy in financial stability (macroeconomic problem) and the prediction of insolvency in insurance sector (microeconomic problem)
This paper addresses two questions related to financial stability. The first is a macroeconomic problem, in which we further investigate the role of monetary policy in explaining banking sector fragility and, ultimately, systemic banking crises. It analyses a large sample of countries over the period 1981-1999. We find that the degree of central bank independence is one of the key variables in explaining financial crises. However, the effects of the degree of independence are not linear. Surprisingly, either a high degree of independence or a high degree of dependence is compatible with a situation of financial stability, while intermediate levels of independence are more likely to be associated with financial crises. It seems that it is the uncertainty related to a non-clear allocation of monetary policy responsibilities that contributes to financial crisis episodes. The second is a microeconomic problem: the prediction of insolvency in insurance companies. This question has been a concern of several parties, stemming from the perceived need to protect the general public and to minimize the associated costs, such as the effects on state insurance guaranty funds or the responsibilities of management and auditors. We have developed a bankruptcy prediction model for Spanish non-life insurance companies, and the results obtained are very encouraging in comparison with previous analyses. This model could be used as an early warning system for supervisors in charge of the soundness of these entities and/or of the stability of the financial system. Most methods applied in the past to tackle these two problems are statistical techniques, and the variables employed in these models do not usually satisfy the statistical assumptions, which complicates the analysis. We propose an approach to these questions based on Rough Set Theory.
[ "rough sets", "financial stability", "insolvency", "central bank independence", "insurance companies" ]
[ "P", "P", "P", "P", "P" ]
4pn:ArV
CLASSIFICATION OF SELF-DUAL CODES OF LENGTH 36
A complete classification of binary self-dual codes of length 36 is given.
[ "self-dual code", "weight enumerator", "mass formula" ]
[ "P", "U", "U" ]
FqVHV3j
Supporting pervasive computing applications with active context fusion and semantic context delivery
Future pervasive computing applications are envisioned to adapt their behaviors by utilizing various contexts of an environment and its users. Such context information is often ambiguous as well as heterogeneous, which makes the delivery of unambiguous context information to real applications extremely challenging. Thus, a significant challenge facing the development of realistic and deployable context-aware services for pervasive computing applications is the ability to deal with these ambiguous contexts. In this paper, we propose a resource-optimized, quality-assured context mediation framework based on efficient context-aware data fusion and semantic-based context delivery. In this framework, contexts are first fused by an active fusion technique based on Dynamic Bayesian Networks and ontology, and further mediated using a composable ontological rule-based model with the involvement of users or application developers. The fused context data are then organized into an ontology-based semantic network, together with the associated ontologies, in order to facilitate efficient context delivery. Experimental results using SunSPOT and other sensors demonstrate the promise of this approach.
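To make the fusion idea concrete, the toy below combines two ambiguous sensor readings about a single context variable in naive-Bayes fashion. It is a deliberately static simplification: the paper uses Dynamic Bayesian Networks plus ontological mediation, and every number here is invented:

```python
import numpy as np

# Invented numbers throughout; a static naive-Bayes stand-in for the DBN.
prior = np.array([0.5, 0.3, 0.2])          # P(activity): sitting/walking/running
lik_accel = np.array([0.2, 0.5, 0.9])      # P(accelerometer evidence | activity)
lik_audio = np.array([0.6, 0.5, 0.3])      # P(audio evidence | activity)

posterior = prior * lik_accel * lik_audio  # fuse both ambiguous sensors
posterior /= posterior.sum()
print(posterior.round(3))                  # fused belief over activities
```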
[ "pervasive computing", "context fusion", "bayesian networks", "ontology", "sunspot", "context awareness" ]
[ "P", "P", "P", "P", "P", "M" ]
1nKNXca
On computing the minimum 3-path vertex cover and dissociation number of graphs
The dissociation number of a graph G is the number of vertices in a maximum size induced subgraph of G with vertex degree at most 1. A k-path vertex cover of a graph G is a subset S of vertices of G such that every path of order k in G contains at least one vertex from S. The minimum 3-path vertex cover is a dual problem to the dissociation number. For this problem, we present an exact algorithm with a running time of O*(1.5171^n) on a graph with n vertices. We also provide a polynomial time randomized approximation algorithm with an expected approximation ratio of 23/11 for the minimum 3-path vertex cover. (C) 2011 Elsevier B.V. All rights reserved.
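For intuition, a much simpler algorithm than either result in the abstract is the greedy 3-approximation sketched below: repeatedly find any path on three vertices, put all three into the cover, and delete them. Since the chosen paths are vertex-disjoint and any cover needs at least one vertex from each, the cover is at most three times optimal. This is not the exact O*(1.5171^n) algorithm nor the 23/11 randomized approximation:

```python
def greedy_3pvc(adj):
    # Greedy 3-approximation sketch for minimum 3-path vertex cover.
    adj = {u: set(vs) for u, vs in adj.items()}   # local mutable copy
    cover = set()

    def find_p3():
        for u, vs in adj.items():
            for v in vs:
                for w in adj[v] - {u}:
                    return (u, v, w)              # u-v-w is a path of order 3
        return None

    while (p := find_p3()) is not None:
        cover.update(p)
        for x in p:                               # delete the three vertices
            for y in adj.pop(x):
                if y in adj:
                    adj[y].discard(x)
    return cover

# toy path 0-1-2-3-4: the greedy returns {0, 1, 2}; {2} alone would suffice
print(greedy_3pvc({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))
```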
[ "dissociation number", "approximation", "path vertex cover" ]
[ "P", "P", "R" ]
-sDLz2X
Interval multiplicative transitivity for consistency, missing values and priority weights of interval fuzzy preference relations
In this paper, the concept of multiplicative transitivity of a fuzzy preference relation, as defined by Tanino [T. Tanino, Fuzzy preference orderings in group decision-making, Fuzzy Sets and Systems 12 (1984) 117-131], is extended to discover whether an interval fuzzy preference relation is consistent or not, and to derive the priority vector of a consistent interval fuzzy preference relation. We achieve this by introducing the concept of interval multiplicative transitivity of an interval fuzzy preference relation and show that, by solving numerical examples, the test of consistency and the weights derived by the simple formulas based on the interval multiplicative transitivity produce the same results as those of the linear programming models proposed by Xu and Chen [Z.S. Xu, J. Chen, Some models for deriving the priority weights from interval fuzzy preference relations, European Journal of Operational Research 184 (2008) 266-280]. In addition, by taking advantage of the interval multiplicative transitivity of an interval fuzzy preference relation, we put forward two approaches to estimate the missing value(s) of an incomplete interval fuzzy preference relation, and present numerical examples to illustrate these two approaches.
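The crisp starting point being extended is Tanino's multiplicative transitivity; one standard formulation, together with the priority-weight characterization it encodes, reads (for all i, j, k, with entries assumed to lie in (0, 1)):

\[
\frac{r_{ij}}{r_{ji}} \cdot \frac{r_{jk}}{r_{kj}} = \frac{r_{ik}}{r_{ki}},
\qquad
r_{ij} = \frac{w_i}{w_i + w_j},
\]

where \(w = (w_1, \dots, w_n)\) is the priority vector; the interval version studied in the paper replaces the \(r_{ij}\) with intervals.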
[ "interval multiplicative transitivity", "consistency", "missing values", "interval fuzzy preference relation", "priority vector" ]
[ "P", "P", "P", "P", "P" ]
317p8sR
An O(n log n) algorithm for finding a shortest central link segment
A central link segment of a simple n-vertex polygon P is a segment s inside P that minimizes the quantity \(\max_{x \in P} \min_{y \in s} d_L(x, y)\), where \(d_L(x, y)\) is the link distance between points x and y of P. In this paper we present an O(n log n) algorithm for finding a central link segment of P. This generalizes previous results for finding an edge or a segment of P from which P is visible. Moreover, in the same time bound, our algorithm finds a central link segment of minimum length. Constructing a central link segment has applications to the problems of finding an optimal robot placement in a simply connected polygonal region and determining the minimum value k for which a given polygon is k-visible from some segment.
[ "link distance", "algorithm design and analysis", "computational geometry", "simple polygon", "shortest segment" ]
[ "P", "M", "U", "R", "R" ]
46S3sas
Deconstructing switch-reference
This paper develops a new view on switch-reference, a phenomenon commonly taken to involve a morphological marker on a verb indicating whether the subject of this verb is coreferent with or disjoint from the subject of another verb. I propose a new structural source of switch-reference marking, which centers around coordination at different heights of the clausal structure, coupled with distinct morphological realizations of the syntactic coordination head. Conjunction of two VPs has two independent consequences: first, only a single external argument is projected; second, the coordinator head is realized by some marker A (the same subject marker). Conjunction of two vPs, by contrast, leads to the projection of two independent external arguments and a different realization of the coordination by a marker B (the different subject marker). The hallmark properties of this analysis are that (i) subject identity or disjointness is only indirectly tied to the switch-reference markers, furnishing a straightforward account of cases where this correlation breaks down; (ii) switch-reference does not operate across fully developed clauses, which accounts for the widely observed featural defectiveness of switch-reference clauses; and (iii) same subject and different subject constructions differ in their syntactic structure, thus accommodating cases where the choice of the switch-reference markers has an impact on event structure. The analysis is mainly developed on the basis of evidence from the Mexican language Seri, the Papuan language Amele, and the North American language Kiowa.
[ "coordination", "clause linkage", "reference tracking", "distributed morphology", "event semantics", "verbal projections" ]
[ "P", "M", "U", "M", "M", "M" ]
-QcRoRR
An optimized parallel LSQR algorithm for seismic tomography
The LSQR algorithm developed by Paige and Saunders (1982) is considered one of the most efficient and stable methods for solving large, sparse, and ill-posed linear (or linearized) systems. In seismic tomography, the LSQR method has been widely used in solving linearized inversion problems. As the amount of seismic observations increases and tomographic techniques advance, the size of inversion problems can grow accordingly. Currently, a few parallel LSQR solvers are presented or available for solving large problems on supercomputers, but their scalability is generally weak because of the significant communication cost among processors. In this paper, we present the details of our optimizations of the LSQR code for, but not limited to, seismic tomographic inversions. The optimizations we have implemented in our LSQR code include: reordering the damping matrix to reduce its bandwidth, thereby simplifying the communication pattern and reducing the amount of communication during calculations; adopting sparse matrix storage formats for efficiently storing and partitioning matrices; using MPI I/O functions to parallelize the data reading and result writing processes; and providing different data partition strategies for efficiently using computational resources. A large seismic tomographic inversion problem, the full-3D waveform tomography for Southern California, is used to explain the details of our optimizations and to examine the performance on the Yellowstone supercomputer at the NCAR-Wyoming Supercomputing Center (NWSC). The results showed that the required wall time of our code for the same inversion problem is much less than that of the LSQR solver from the PETSc library (Balay et al., 1997).
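The serial kernel that all of these optimizations wrap is a damped LSQR solve on a sparse matrix. A baseline sketch with SciPy follows, on toy random data; the reordering, MPI I/O, and partitioning strategies described above are not shown:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

# Toy sparse least-squares system min ||Ax - b||; sizes are placeholders.
A = sparse_random(20000, 5000, density=1e-3, format="csr", random_state=1)
x_true = np.random.default_rng(1).standard_normal(5000)
b = A @ x_true

result = lsqr(A, b, damp=0.01, atol=1e-8, btol=1e-8)   # damped LSQR solve
x, istop, itn = result[0], result[1], result[2]
print(f"stopped with code {istop} after {itn} iterations")
```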
[ "lsqr algorithm", "inverse problems", "tomographic inversion", "mpi", "computational seismology", "parallel scientific computing" ]
[ "P", "P", "P", "P", "M", "M" ]
CcB9FJU
on computer-assisted classification of coupled integrable equations
We show how the triangularization method of the second author can be successfully applied to the problem of classification of homogeneous coupled integrable equations. The classifications rely on the recent algorithm developed by the first author that requires solving 17 systems of polynomial equations. We show that these systems can be completely resolved in the case of coupled Korteweg-de Vries, Sawada-Kotera and Kaup-Kupershmidt-type equations.
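As a toy illustration of triangularizing a polynomial system, far smaller than the 17 systems mentioned above, a lexicographic Groebner basis already exhibits the triangular shape that such decompositions exploit (the system below is invented, not taken from the paper):

```python
from sympy import groebner, symbols

a, b = symbols("a b")
# Toy system: a^2 + b^2 = 1 and a = b.
G = groebner([a**2 + b**2 - 1, a - b], a, b, order="lex")
print(G)   # triangular: the last basis polynomial is univariate in b
```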
[ "generalized symmetries", "integrable pdes", "polynomial systems", "triangular decompositions", "mathematical physics" ]
[ "U", "M", "R", "M", "U" ]