Dataset schema: id (string, length 7), title (string, length 3 to 578), abstract (string, length 0 to 16.7k), keyphrases (sequence), prmu (sequence).
hDNKFs3
A novel method for fingerprint verification that approaches the problem as a two-class pattern recognition problem
We present a system for fingerprint verification that approaches the problem as a two-class pattern recognition problem. The distances of the test fingerprint to the reference fingerprints are normalized by the corresponding mean values obtained from the reference set, to form a five-dimensional feature vector. This feature vector is then projected onto a one-dimensional Karhunen-Loeve space and then classified into one of the two classes (genuine or impostor).
[ "fingerprint verification", "support vector machine" ]
[ "P", "M" ]
1KxVAUQ
The uncovering of hidden structures by Latent Semantic Analysis
Latent Semantic Analysis (LSA) is a well-known method for information retrieval. It has also been applied as a model of cognitive processing and word-meaning acquisition. This dual importance of LSA derives from its capacity to modulate the meaning of words by contexts, dealing successfully with polysemy and synonymy. The underlying reasons why the method works are not yet well understood. We propose that the method works because it detects an underlying block structure (the blocks corresponding to topics) in the term-by-document matrix. In real cases this block structure is hidden by perturbations. We propose that the correct explanation for LSA must be sought in the structure of the singular vectors rather than in the profile of the singular values. Using Perron-Frobenius theory we show that the presence of disjoint blocks of documents is marked by sign-homogeneous entries in the vectors corresponding to the documents of one block and zeros elsewhere. In the case of nearly disjoint blocks, perturbation theory shows that if the perturbations are small, the zeros in the leading vectors are replaced by small numbers (pseudo-zeros). Since the singular values of each block might be very different in magnitude, their order does not mirror the order of the blocks. When the norms of the blocks are similar, LSA works fine, but we propose that when the topics have different sizes, the usual procedure of selecting the first k singular triplets (k being the number of blocks) should be replaced by a method that selects the perturbed Perron vectors of each block.
[ "lsa", "perronfrobenius theory", "perturbation theory", "information search and retrieval" ]
[ "P", "P", "P", "R" ]
-nshji8
computing monodromy groups defined by plane algebraic curves
We present a symbolic-numeric method to compute the monodromy group of a plane algebraic curve viewed as a ramified covering space of the complex plane. Following the definition, our algorithm is based on analytic continuation of algebraic functions above paths in the complex plane. Our contribution is threefold: first, we show how to use a minimum spanning tree to minimize the length of paths; second, we propose a strategy that gives a good compromise between the number of steps and the truncation orders of Puiseux expansions, obtaining for the first time a complexity result about the number of steps; finally, we present an efficient numerical-modular algorithm to compute Puiseux expansions above critical points, which is a nontrivial task.
[ "monodromy", "algebraic curves", "riemann surfaces", "symbolic-numeric computation" ]
[ "P", "P", "U", "R" ]
1CC37p4
Stone-like representation theorems and three-valued filters in R0-algebras (nilpotent minimum algebras)
Nilpotent minimum algebras (NM-algebras) are the algebraic counterpart of a formal deductive system in which conjunction is modeled by the nilpotent minimum t-norm, a logic also independently introduced by Guo-Jun Wang in the mid 1990s. Such algebras are to this logic just what Boolean algebras are to classical propositional logic. In this paper, by introducing respectively the Stone topology and a three-valued fuzzy Stone topology on the set of all maximal filters in an NM-algebra, we first establish two analogues for an NM-algebra of the well-known Stone representation theorem for a Boolean algebra, which state that the Boolean skeleton of an NM-algebra is isomorphic to the algebra of all clopen subsets of its Stone space and the three-valued skeleton is isomorphic to the algebra of all clopen fuzzy subsets of its three-valued fuzzy Stone space, respectively. Then we introduce the notions of Boolean filter and of three-valued filter in an NM-algebra, and finally we prove that three-valued filters and closed subsets of the Stone space of an NM-algebra are in one-to-one correspondence, and that Boolean filters uniquely correspond to closed subsets of the subspace consisting of all ultrafilters.
[ "nilpotent minimum", "maximal filter", "stone representation theorem", "non-classical logics", "finite square intersection property", "prime ideal theorem" ]
[ "P", "P", "P", "M", "U", "M" ]
3GSuG3A
An adaptive learning scheme for load balancing with zone partition in multi-sink wireless sensor network
In much research on load balancing in multi-sink WSNs, sensors usually choose the nearest sink as the destination for sending data. However, in a WSN, events often occur in a specific area. If all sensors in this area follow the nearest-sink strategy, the sensors around the nearest sink, called the hotspot, will exhaust their energy early. This means that the sink is isolated from the network early and many routing paths are broken. In this paper, we propose an adaptive learning scheme for load balancing in multi-sink WSNs. An agent in a centralized mobile anchor with a directional antenna is introduced to adaptively partition the network into several zones according to the residual energy of the hotspots around the sink nodes. In addition, machine learning is applied to the mobile anchor to make it adaptable to any traffic pattern. Through interactions with the environment, the agent can discover a near-optimal control policy for the movement of the mobile anchor. The policy minimizes the variance of residual energy among the sinks, which prevents the early isolation of a sink and prolongs the network lifetime.
[ "adaptive learning", "load balancing", "multi-sink wireless sensor network", "reinforcement learning problem", "q-learning based adaptive zone partition scheme" ]
[ "P", "P", "P", "M", "M" ]
-3eWjEu
interactive visual tools to explore spatio-temporal variation
CommonGIS is a developing software system for exploratory analysis of spatial data. It includes a multitude of tools applicable to different data types and helping an analyst to find answers to a variety of questions. CommonGIS has been recently extended to support exploration of spatio-temporal data, i.e. temporally variant data referring to spatial locations. The set of new tools includes animated thematic maps, map series, value flow maps, time graphs, and dynamic transformations of the data. We demonstrate the use of the new tools by considering different analytical questions arising in the course of analysis of thematic spatio-temporal data.
[ "animated maps", "temporal variation", "time-series spatial data", "information visualisation", "time-series analysis", "exploratory data analysis" ]
[ "R", "R", "M", "U", "M", "R" ]
3&YgieW
Multiprocessor system-on-chip (MPSoC) technology
The multiprocessor system-on-chip (MPSoC) uses multiple CPUs along with other hardware subsystems to implement a system. A wide range of MPSoC architectures have been developed over the past decade. This paper surveys the history of MPSoCs to argue that they represent an important and distinct category of computer architecture. We consider some of the technological trends that have driven the design of MPSoCs. We also survey computer-aided design problems relevant to the design of MPSoCs.
[ "multiprocessor", "multiprocessor system-on-chip (mpsoc)", "configurable processors", "encoding", "hardware/software codesign" ]
[ "P", "P", "U", "U", "M" ]
1E2vvCY
Statistical behavior of joint least-square estimation in the phase diversity context
The images recorded by optical telescopes are often degraded by aberrations that induce phase variations in the pupil plane. Several wavefront sensing techniques have been proposed to estimate aberrated phases. One of them is phase diversity, for which the joint least-square approach introduced by Gonsalves et al. is a reference method to estimate phase coefficients from the recorded images. In this paper, we rely on the asymptotic theory of Toeplitz matrices to show that Gonsalves' technique provides a consistent phase estimator as the size of the images grows. No comparable result is yielded by the classical joint maximum likelihood interpretation (e.g., as found in the work by Paxman et al.). Finally, our theoretical analysis is illustrated through simulated problems.
[ "statistics", "phase diversity", "toeplitz matrices", "error analysis", "least-squares methods", "optical image processing", "parameter estimation" ]
[ "P", "P", "P", "M", "R", "M", "M" ]
2&z3Ah&
Integrated in silico approaches for the prediction of Ames test mutagenicity
The bacterial reverse mutation assay (Ames test) is a biological assay used to assess the mutagenic potential of chemical compounds. In this paper, approaches for the development of an in silico mutagenicity screening tool are described. Three individual in silico models, covering both structure-activity relationship methods (SARs) and quantitative structure-activity relationship methods (QSARs), were built using three different modelling techniques: (1) an in-house alert model, which uses a SAR approach in which alerts are generated based on experts' judgements; (2) a kNN approach (k-Nearest Neighbours), which is a QSAR model in which a prediction is given based on the outcomes of a compound's k chemical neighbours; (3) a naive Bayesian model (NB), another QSAR model, in which a prediction is derived using a Bayesian formula over preselected informative chemical features (e.g., physico-chemical and structural descriptors). These in silico models were compared against two well-known alert models (DEREK and ToxTree) and also against three different consensus approaches (Categorical Bayesian Integration Approach (CBI), Partial Least Squares Discriminant Analysis (PLS-DA) and a simple majority vote approach). By applying these integration methods to the validation sets, it was shown that both integration models (PLS-DA and CBI) achieved better performance than any of the individual models or the consensus obtained by the simple majority rule. In conclusion, the recommendation of this paper is that when obtaining consensus predictions for Ames mutagenicity, approaches like PLS-DA or CBI should be the first choice for integration, rather than a simple majority vote approach.
[ "ames", "in silico models", "sar", "qsar", "admet" ]
[ "P", "P", "P", "P", "U" ]
1nKihTu
Visualization and clustering of categorical data with probabilistic self-organizing map
This paper introduces a self-organizing map dedicated to clustering, analysis and visualization of categorical data. Usually, when dealing with categorical data, topological maps use an encoding stage: categorical data are changed into numerical vectors and traditional numerical algorithms (SOM) are run. In the present paper, we propose a novel probabilistic formalism of the Kohonen map dedicated to categorical data, in which neurons are represented by probability tables. No encoding of the variables is needed. We evaluate the effectiveness of our model on four examples using real data. Our experiments show that our model provides good-quality results when dealing with categorical data.
[ "visualization", "probabilistic self-organizing map", "categorical variables", "em algorithm" ]
[ "P", "P", "R", "M" ]
-txroLZ
Stiffness analysis of parallelogram-type parallel manipulators using a strain energy method
Stiffness analysis of a general PTPM using an algebraic method. Result comparison between the proposed method and a finite element analysis method. A new stiffness index relating the stiffness property to the wrench experienced in a task.
[ "stiffness analysis", "parallelogram-type parallel manipulator", "strain energy method", "algebraic method", "stiffness index" ]
[ "P", "P", "P", "P", "P" ]
4Q4gvb6
Simulation of natural and social process interactions - An example from Bronze Age Mesopotamia
New multimodel simulations of Bronze Age Mesopotamian settlement system dynamics, using advanced object-based simulation frameworks, are addressing fine-scale interaction of natural processes (crop growth, hydrology, etc.) and social processes (kinship-driven behaviors, farming and herding practices, etc.) on a daily basis across multi-generational model runs. Key components of these simulations are representations of initial settlement populations that are demographically and socially plausible, and detailed models of social mechanisms that can produce and maintain realistic textures of social structure and dynamics over time. The simulation engine has broad applicability and is also being used to address modern problems such as agroeconomic sustainability in Southeast Asia. This article describes the simulation framework and presents results of initial studies, highlighting some social system representations.
[ "simulations", "social", "interaction", "multimodel", "agent-based", "holistic", "environment" ]
[ "P", "P", "P", "P", "U", "U", "U" ]
4czo4RA
Newton-Like Dynamics and Forward-Backward Methods for Structured Monotone Inclusions in Hilbert Spaces
In a Hilbert space setting we introduce dynamical systems, which are linked to Newton and Levenberg-Marquardt methods. They are intended to solve, by splitting methods, inclusions governed by structured monotone operators M = A + B, where A is a general maximal monotone operator, and B is monotone and locally Lipschitz continuous. Based on the Minty representation of A as a Lipschitz manifold, we show that these dynamics can be formulated as differential systems, which are relevant to the Cauchy-Lipschitz theorem, and involve separately B and the resolvents of A. In the convex subdifferential case, by using Lyapunov asymptotic analysis, we prove a descent minimizing property and weak convergence to equilibria of the trajectories. Time discretization of these dynamics gives algorithms combining Newton's method and forward-backward methods for solving structured monotone inclusions.
[ "monotone inclusions", "newton method", "levenbergmarquardt regularization", "dissipative dynamical systems", "lyapunov analysis", "weak asymptotic convergence", "forward-backward algorithms", "gradient-projection methods" ]
[ "P", "P", "M", "M", "R", "R", "R", "M" ]
3UWobLy
Damage identification of a target substructure with moving load excitation
This paper presents a substructural damage identification approach under moving vehicular loads based on a dynamic response reconstruction technique. The relationship between two sets of time response vectors from the substructure subject to moving loads is formulated with the transmissibility matrix based on impulse response function in the wavelet domain. Only the finite element model of the intact target substructure and the measured dynamic acceleration responses from the target substructure in the damaged state are required. The time-histories of moving loads and interface forces on the substructure are not required in the proposed algorithm. The dynamic response sensitivity-based method is adopted for the substructural damage identification with the local damage modeled as a reduction in the elemental stiffness factor. The adaptive Tikhonov regularization technique is employed to have an improved identification result when noise effect is included in the measurements. Numerical studies on a three-dimensional box-section girder bridge deck subject to a single moving force or a two-axle three-dimensional moving vehicle are conducted to investigate the performance of the proposed substructural damage identification approach. The simulated local damage can be identified with 5% noise in the measured data.
[ "damage identification", "substructure", "moving loads", "response reconstruction", "transmissibility", "wavelet" ]
[ "P", "P", "P", "P", "P", "P" ]
3JdZhCP
randomized parallel communication (preliminary version)
Using a simple finite-degree interconnection network among n processors and a straightforward randomized algorithm for packet delivery, it is possible to deliver a set of n packets travelling to unique targets from unique sources in O(log n) expected time. The expected delivery time is, in other words, the depth of the interconnection graph. The b-way shuffle networks are examples of such networks. This represents a crude analysis of the transient response to a sudden but very uniform request load on the network. Variations in the uniformity of the load are also considered. Consider s_i packets with randomly chosen targets beginning at a source labelled i. The expected overall delay is then [equation], where the labelling is chosen so that s_1 >= s_2 >= ... These ideas can be used to gauge the asymptotic efficiency of various synchronous parallel algorithms which use such a randomized communication system. The only important assumption is that variations in the physical transmission time along any connection link are negligible in comparison to the amount of work done at a processor.
[ "randomization", "parallel communication", "parallel", "communication", "version", "use", "interconnect", "interconnection network", "connection", "network", "processor", "randomized algorithm", "timing", "graph", "examples", "analysis", "response", "variation", "label", "delay", "efficiency", "synchronization", "parallel algorithm", "systems", "physical", "linking", "comparisons", "worst case", "average response time" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "M" ]
3znkPCS
feature selection for fast speech emotion recognition
In speech-based emotion recognition, both acoustic feature extraction and feature classification are usually time-consuming, which prevents the system from running in real time. In this paper, we propose a novel feature selection (FS) algorithm to filter out low-efficiency features for fast speech emotion recognition. Firstly, each acoustic feature's discriminative ability, time consumption and redundancy are calculated. Secondly, we map the original feature space into a nonlinear one to select nonlinear features, which can exploit the underlying relationships among the original features. Thirdly, highly discriminative nonlinear features with low time consumption are initially preserved. Finally, a further selection is performed to obtain low-redundancy features based on these preserved features. The final selected nonlinear features are used in feature extraction and feature classification in our approach; we call them qualified features. The experimental results demonstrate that recognition time can be dramatically reduced in not only the extraction phase but also the classification phase. Moreover, competitive recognition accuracy is observed in speech emotion recognition.
[ "feature selection", "emotion recognition", "time consumption", "qualified features", "nonlinear space" ]
[ "P", "P", "P", "P", "R" ]
3otQgdw
Automated inspection planning of free-form shape parts by laser scanning
The inspection operation accounts for a large portion of manufacturing lead time, and its importance in quality control cannot be overemphasized. In recent years, due to the development of laser technology, the accuracy of laser scanners has improved significantly so that they can be used in a production environment. They are noncontact measuring devices and usually have a scanning speed 50-100 times faster than that of coordinate measuring machines. This laser-scanning technology provides a platform that enables a 100% inspection of complicated shape parts. This research proposes algorithms that lead to the automation of laser scanner-based inspection operations. The proposed algorithms consist of three steps: firstly, all possible accessible directions at each sampled point on a part surface are generated, considering the constraints existing in a laser scanning operation. The constraints include satisfying the view angle, the depth of view, checking interference with the part, and avoiding collision with the probe. Secondly, the number of scans and the most desired direction for each scan are calculated. Finally, the scan path that gives the minimum scan time is generated. The proposed algorithms are applied to sample parts and the results are discussed.
[ "automated inspection", "laser scanner", "reverse engineering" ]
[ "P", "P", "U" ]
3zxKa9E
GBF: a grammar based filter for Internet applications
Observing network traffic is necessary for achieving different purposes such as system performance, network debugging and/or information security. Observations, as such, are obtained from low-level monitors that may record a large volume of relevant and irrelevant events. Thus, adequate filters are needed to pass only interesting information. This work presents a multilayer system, GBF, that integrates both packet (low-level) and document (high-level) filters. The design of GBF is grammar-based, relying upon a set of context-free grammars to carry out various processes, especially the document reconstruction process. GBF consists of three layers: an acquisition layer, a packet filter layer, and a reconstruction layer. The performance of the reconstruction process is evaluated in terms of the time consumed during service separation and session separation tasks.
[ "document reconstruction", "packet monitoring", "event filterng", "sniffing", "context free grammar" ]
[ "P", "R", "M", "U", "M" ]
-M2:naS
Enhanced particle swarm optimizer incorporating a weighted particle
This study proposes an enhanced particle swarm optimizer incorporating a weighted particle (EPSOWP) to improve evolutionary performance on a set of benchmark functions. In the conventional particle swarm optimizer (PSO), there are two principal forces that guide the moving direction of each particle. However, if the current particle lies too close to either the personal best particle or the global best particle, the velocity is mainly updated by only one term. As a result, the search step becomes smaller and the optimization of the swarm is likely to be trapped in a local optimum. To address this problem, we define a weighted particle for incorporation into the particle swarm optimization. Because the weighted particle has a better opportunity of getting closer to the optimal solution than the global best particle during the evolution, the EPSOWP is capable of guiding the swarm in a better direction to search for the optimal solution. Simulation results show the effectiveness of the EPSOWP, which outperforms various evolutionary algorithms on a selected set of benchmark functions. Furthermore, the proposed EPSOWP is applied to controller design and parameter identification for an inverted pendulum system, as well as parameter learning of a neural network for function approximation, to show its viability for solving practical design problems.
[ "weighted particle", "particle swarm optimization (pso)", "inverted pendulum system", "neural network", "convergence", "pid controller design" ]
[ "P", "P", "P", "P", "U", "M" ]
3yHXUxP
Media access protocol for a coexisting cognitive femtocell network
Femtocell networks are widely deployed to extend cellular network coverage into indoor environments such as large office spaces and homes. Cognitive radio functionality can be implemented in femtocell networks based on an overlay mechanism under the assumption of a hierarchical access scenario. This study introduces a novel femtocell network architecture that is characterized by completely autonomous femtocell bandwidth access and a distributed media access control protocol supporting data and real-time traffic. A detailed description of the architecture and media access protocol is presented. Furthermore, an in-depth theoretical analysis of the proposed media access protocol is performed using discrete-time Markov chain modeling to validate the effectiveness of the proposed protocol and architecture.
[ "femtocell network", "media access control", "cognitive radio network", "dynamic spectrum access" ]
[ "P", "P", "R", "M" ]
4r3Y7Lq
Integrating computer animation and multimedia
Multimedia provides an immensely powerful tool for the dissemination of both information and entertainment. Current multimedia presentations consist of synchronised excerpts of media (such as sound, video and text) which are coordinated by an author to ensure a clear narrative is presented to the audience. However, each of the segments of the presentation consists of previously recorded footage; only the timing and synchronisation are dynamically constructed. The next logical advance for such systems is therefore to include the capability of generating material 'on-the-fly' in response to the actions of the audience. This paper describes a mechanism for using computer animation to generate this interactive material. Unlike previous animation techniques, the approach presented here is suitable for constructing a storyline which the author can control but the user can influence. To allow such techniques to be used, we also present a multimedia authoring and playback system which incorporates interactive animation with existing media.
[ "computer animation", "multimedia", "keyframing" ]
[ "P", "P", "U" ]
1:YEaeR
an ontology for supporting communities of practice
In the context of the Palette project, aimed at enhancing individual and organizational learning in Communities of Practice (CoPs), we are developing Knowledge Management (KM) services. Our approach is based on an ontology dedicated to CoPs, built from an analysis of information sources about the eleven CoPs available in the Palette project. This ontology aims both at modeling the members of a CoP and at annotating the CoP's knowledge resources. The paper describes our method for building this ontology, its structure and contents, and analyses our experience feedback from the cooperative building of this ontology.
[ "ontology", "community of practice", "knowledge management" ]
[ "P", "P", "P" ]
1f767:Q
A set of neural tools for human-computer interactions: Application to the handwritten character recognition, and visual speech recognition problems
This paper presents a new technique of data coding and an associated set of homogeneous processing tools for the development of Human-Computer Interactions (HCI). The proposed technique facilitates the fusion of different sensorial modalities and simplifies the implementations. The coding takes into account the spatio-temporal nature of the signals to be processed in the framework of a sparse representation of data. Neural networks adapted to such a representation of data are proposed to perform the recognition tasks. Their development is illustrated by two examples: one of on-line handwritten character recognition, and the other of visual speech recognition.
[ "visual speech recognition", "on-line handwritten character recognition", "human-machine interaction", "lipreading", "spatio-temporal coding", "spatio-temporal neural networks", "spatio-temporal patterns", "spiking neurons" ]
[ "P", "P", "M", "U", "R", "R", "M", "U" ]
1FRBh7j
impact of sub-optimal checkpoint intervals on application efficiency in computational clusters
As computational clusters rapidly grow in both size and complexity, system reliability and, in particular, application resilience have become increasingly important factors to consider in maintaining efficiency and providing improved computational performance over predecessor systems. One commonly used mechanism for providing application fault tolerance in parallel systems is checkpointing. By making use of a multi-cluster simulator, we study the impact of sub-optimal checkpoint intervals on overall application efficiency. By using a model of a 1926-node cluster and workload statistics from Los Alamos National Laboratory to parameterize the simulator, we find that dramatically overestimating the AMTTI has a fairly minor impact on application efficiency while potentially having a much more severe impact on user-centric performance metrics such as queueing delay. We compare and contrast these results with the trends predicted by an analytical model.
[ "checkpointing", "resilience", "simulation", "prediction" ]
[ "P", "P", "P", "P" ]
-NWt62B
An approach to a content-based retrieval of multimedia data
This paper presents a data model tailored for multimedia data representation, along with the main characteristics of a Multimedia Query Language that exploits the features of the proposed model. The model addresses data presentation, manipulation and content-based retrieval. It consists of three parts: a Multimedia Description Model, which provides a structural view of raw multimedia data, a Multimedia Presentation Model, and a Multimedia Interpretation Model which allows semantic information to be associated with multimedia data. The paper focuses on the structuring of a multimedia data model which provides support for content-based retrieval of multimedia data. The Query Language is an extension of a traditional query language which allows restrictions to be expressed on features, concepts, and the structural aspects of the objects of multimedia data and the formulation of queries with imprecise conditions. The result of a query is an approximate set of database objects which partially match such a query.
[ "data modeling", "multimedia information systems", "information storage and retrieval" ]
[ "P", "M", "M" ]
3zPwDZy
Monte Carlo EM with importance reweighting and its applications in random effects models
In this paper we propose a new Monte Carlo EM algorithm to compute maximum likelihood estimates in the context of random effects models. The algorithm involves the construction of efficient sampling distributions for the Monte Carlo implementation of the E-step, together with a reweighting procedure that allows the same sample of random effects to be used repeatedly. In addition, we explore the use of stochastic approximations to speed up convergence once stability has been reached. Our algorithm is compared with that of McCulloch (1997). Extensions to more general problems are discussed.
[ "stochastic approximations", "importance sampling", "metropolishastings algorithm" ]
[ "P", "R", "M" ]
-YKore&
A perceptual approach for stereoscopic rendering optimization
The traditional way of stereoscopic rendering requires rendering the scene for left and right eyes separately; which doubles the rendering complexity. In this study, we propose a perceptually-based approach for accelerating stereoscopic rendering. This optimization approach is based on the Binocular Suppression Theory, which claims that the overall percept of a stereo pair in a region is determined by the dominant image on the corresponding region. We investigate how binocular suppression mechanism of human visual system can be utilized for rendering optimization. Our aim is to identify the graphics rendering and modeling features that do not affect the overall quality of a stereo pair when simplified in one view. By combining the results of this investigation with the principles of visual attention, we infer that this optimization approach is feasible if the high quality view has more intensity contrast. For this reason, we performed a subjective experiment, in which various representative graphical methods were analyzed. The experimental results verified our hypothesis that a modification, applied on a single view, is not perceptible if it decreases the intensity contrast, and thus can be used for stereoscopic rendering.
[ "perception", "stereoscopic rendering", "binocular suppression", "binocular vision" ]
[ "P", "P", "P", "M" ]
4Phey9U
using traditional loop unrolling to fit application on a new hybrid reconfigurable architecture
This paper presents a strategy to modify a sequential implementation of an H.264/AVC motion estimation algorithm to run on a new reconfigurable architecture called RoSA. The modifications aim to provide more parallelism that can be exploited by the architecture. In the strategy presented in this paper, we used traditional loop unrolling and profile information as techniques to modify the application and to generate a best-fit solution for the RoSA architecture.
[ "reconfigurable architecture", "stream-based", "optimization", "performance" ]
[ "P", "U", "U", "U" ]
jvk-Gdf
Evaluating fluid semantics for passive stochastic process algebra cooperation
Fluid modelling is a next-generation technique for analysing massive performance models. Passive cooperation is a popular cooperation mechanism frequently used by performance engineers, so having an accurate translation of passive cooperation into a fluid model is of direct practical application. We compare different existing styles of fluid model translations of passive cooperation in a stochastic process algebra and show how the previous model can be improved upon significantly. We evaluate the new passive cooperation fluid semantics and show that the first-order fluid model is a good approximation to the dynamics of the underlying continuous-time Markov chain. We show that in a family of possible translations to the fluid model, there is an optimal translation which can be expected to introduce the least error. Finally, we use these new techniques to show how the scalability of a passively-cooperating distributed software architecture can be assessed.
[ "stochastic process algebra", "passive cooperation", "fluid approximation" ]
[ "P", "P", "R" ]
:&uGFto
using new media to improve self-help for clients and staff
One of the most common frustrations for any person looking for technical support is actually finding effective technical support. Even if a solution seems clear, it can be misunderstood if the vernacular is not just right. A large part of a successful support call involves being able to determine the actual problem based on the information the client provides. Help desk analysts must have the ability to translate "non-tech" descriptions to identify a problem in technical terms and then communicate a solution using vernacular the client can understand. This process is always a little different. If we aim to be successful analysts, we must speak different "languages" in order to help our clients. Based on this logic, it stands to reason that our self-help documentation must do the same. Providing a variety of methods to get self-help ensures a message will be received by a wider audience. In the world of modern media, audiences are presented with many ways to consume information. This ensures the message is heard by the most people in a manner that is the most appealing and the most clear. New methods of consuming information have become possible as the face of mainstream media has become democratized over the last few years. This is thanks largely to the fact that the tools needed to create and distribute content have become affordable and readily available to anyone with a bit of technical skill. Anyone with a laptop, a webcam and a little imagination can and does create content. Considering all of this, we asked ourselves, "Why shouldn't we?" We have found that creating content in new media is relatively easy and fun. Finding and creating new methods to deliver content positively engages and challenges our help desk team. Thinking about how to best use new media requires help desk analysts to rethink otherwise standardized and mundane processes and create fresh perspectives. The creation and production of new media establishes stronger ownership of procedures and processes. We would like to share the following from our ongoing experiences with new media at our help desk: general issues we see with clients finding help; how creating new media creates stronger ownership and morale among staff; expanding the technical skills of help desk staff; how using new media improves our client experience; casting a wider net (ensuring a message gets to the most people); how we use new media and what we have done with it; and how to make your own video podcast in 1,345 easy steps!
[ "new media", "self-help", "video podcast", "team building", "client support" ]
[ "P", "P", "P", "M", "R" ]
3J&p66Y
A framework for preservation of cloud users' data privacy using dynamic reconstruction of metadata
In the rising paradigm of cloud computing, attaining sustainable levels of cloud users' trust in using cloud services depends directly on effective mitigation of the associated impending risks and resultant security threats. Among the various indispensable security services required to ensure effective cloud functionality, leading to enhancement of users' confidence in using cloud offerings, those related to the preservation of cloud users' data privacy are significantly important and must be mature enough to withstand the imminent security threats, as emphasized in this research paper. This paper highlights the possibility of exploiting the metadata stored in a cloud's database in order to compromise the privacy of users' data items stored using a cloud provider's simple storage service. It then proposes a framework based on database schema redesign and dynamic reconstruction of metadata for the preservation of cloud users' data privacy. Using the sensitivity parameterization and parent-class membership of cloud database attributes, the database schema is modified using cryptographic as well as relational privacy-preservation operations. At the same time, unaltered access to database files is ensured for the cloud provider using dynamic reconstruction of metadata for the restoration of the original database schema, when required. The suitability of the proposed technique for private cloud environments is ensured by keeping the formulation of its constituent steps well aligned with the recommendations proposed by various Standards Development Organizations working in this domain.
[ "privacy", "metadata", "cloud computing", "private cloud", "ubuntu enterprise cloud eucalyptus" ]
[ "P", "P", "P", "P", "M" ]
1evzY8B
A systematic literature review on SOA migration
When Service Orientation was introduced as the solution for retaining and rehabilitating legacy assets, both researchers and practitioners proposed techniques, methods, and guidelines for SOA migration. With so much hype surrounding SOA, it is not surprising that the concept was interpreted in many different ways, and consequently, different approaches to SOA migration were proposed. Accordingly, soon there was an abundance of methods that were hard to compare and eventually adopt. Against this backdrop, this paper reports on a systematic literature review that was conducted to extract the categories of SOA migration proposed by the research community. We provide the state-of-the-art in SOA migration approaches, and discuss categories of activities carried out and knowledge elements used or produced in those approaches. From such categorization, we derive a reference model, called SOA migration frame of reference, that can be used for selecting and defining SOA migration approaches. As a co-product of the analysis, we shed light on how SOA migration is perceived in the field, which further points to promising future research directions.
[ "systematic literature review", "migration", "service orientation", "knowledge management" ]
[ "P", "P", "P", "M" ]
1AWx24R
Application of projection pursuit learning to boundary detection and deblurring in images
Projection pursuit learning networks (PPLNs) have been used in many fields of research but have not been widely used in image processing. In this paper we demonstrate how this highly promising technique may be used to connect edges and produce continuous boundaries. We also propose the application of PPLN to deblurring a degraded image when little or no a priori information about the blur is available. The PPLN was successful at developing an inverse blur filter to enhance blurry images. Theory and background information on projection pursuit regression (PPR) and PPLN are also presented.
[ "boundary detection", "projection pursuit learning networks", "projection pursuit regression", "image deblurring" ]
[ "P", "P", "P", "R" ]
4qFoBiX
Learning to transform time series with a few examples
We describe a semisupervised regression algorithm that learns to transform one time series into another time series given examples of the transformation. This algorithm is applied to tracking, where a time series of observations from sensors is transformed to a time series describing the pose of a target. Instead of defining and implementing such transformations for each tracking task separately, our algorithm learns a memoryless transformation of time series from a few example input-output mappings. The algorithm searches for a smooth function that fits the training examples and, when applied to the input time series, produces a time series that evolves according to assumed dynamics. The learning procedure is fast and lends itself to a closed-form solution. It is closely related to nonlinear system identification and manifold learning techniques. We demonstrate our algorithm on the tasks of tracking RFID tags from signal strength measurements, recovering the pose of rigid objects, deformable bodies, and articulated bodies from video sequences. For these tasks, this algorithm requires significantly fewer examples compared to fully supervised regression algorithms or semisupervised learning algorithms that do not take the dynamics of the output time series into account.
[ "nonlinear system identification", "manifold learning", "semisupervised learning", "example-based tracking" ]
[ "P", "P", "P", "M" ]
4X:UmxH
Almost periodic solutions to abstract semilinear evolution equations with Stepanov almost periodic coefficients
In this paper, almost periodicity of the abstract semilinear evolution equation u'(t) = A(t)u(t) + f(t, u(t)) with Stepanov almost periodic coefficients is discussed. We establish a new composition theorem for Stepanov almost periodic functions; and, with its help, we study the existence and uniqueness of almost periodic solutions to the above semilinear evolution equation. Our results are new even for the case A(t) = A.
[ "almost periodic", "semilinear evolution equations", "stepanov almost periodic", "banach space" ]
[ "P", "P", "P", "U" ]
4185mPS
phoenix-based clone detection using suffix trees
A code clone represents a sequence of statements that are duplicated in multiple locations of a program. Clones often arise in source code as a result of multiple cut/paste operations on the source, or due to the emergence of crosscutting concerns. Programs containing code clones can manifest problems during the maintenance phase. When a fault is found or an update is needed on the original copy of a code section, all similar clones must also be found so that they can be fixed or updated accordingly. The ability to detect clones becomes a necessity when performing maintenance tasks. However, if done manually, clone detection can be a slow and tedious activity that is also error-prone. A tool that can automatically detect clones offers a significant advantage during software evolution. With such an automated detection tool, clones can be found and updated in less time. Moreover, restructuring or refactoring of these clones can yield better performance and modularity in the program. This paper describes an investigation into an automatic clone detection technique developed as a plug-in for Microsoft's new Phoenix framework. Our investigation finds function-level clones in a program using abstract syntax trees (ASTs) and suffix trees. An AST provides the structural representation of the code after the lexical analysis process. The AST nodes are used to generate a suffix tree, which allows analysis on the nodes to be performed rapidly. We use the same methods that have been successfully applied to find duplicate sections in biological sequences to search for matches on the suffix tree that is generated, which in turn reveal matches in the code.
[ "clone detection", "suffix trees", "code clones", "software analysis" ]
[ "P", "P", "P", "R" ]
-u8RgeH
Slimeware: Engineering Devices with Slime Mold
The plasmodium of the acellular slime mold Physarum polycephalum is a gigantic single cell visible to the unaided eye. The cell shows a rich spectrum of behavioral patterns in response to environmental conditions. In a series of simple experiments we demonstrate how to make computing, sensing, and actuating devices from the slime mold. We show how to program living slime mold machines by configurations of repelling and attracting gradients and demonstrate the workability of the living machines on tasks of computational geometry, logic, and arithmetic.
[ "slime mold", "parallel biological computers", "amorphous computers", "living technology" ]
[ "P", "M", "M", "M" ]
19W5h6D
Automated aspect-oriented decomposition of process-control systems for ultra-high dependability assurance
This paper presents a method for decomposing process-control systems. This decomposition method is automated, meaning that a series of principles that can be evolved to support automated tools are given to help a designer decompose complex systems into a collection of simpler components. Each component resulting from the decomposition process can be designed and implemented independently of the other components. Also, these components can be tested or verified by the end-user independently of each other. Moreover, the system properties, such as safety, stability, and reliability, can be mathematically inferred from the properties of the individual components. These components are referred to as IDEAL (Independently Developable End-user Assessable Logical) components. This decomposition method is applied to a case study specified by the High-Integrity Systems group at Sandia National Labs, which involves the control of a future version of the Bay Area Rapid Transit (BART) system.
[ "process-control systems", "dependability assurance", "software decomposition", "aspect-oriented modeling" ]
[ "P", "P", "M", "M" ]
19zEEYL
Diffusion-Confusion Based Light-Weight Security for Item-RFID Tag-Reader Communication
In this paper we propose a challenge-response protocol called DCSTaR, which takes a novel approach to solving security issues that are specific to low-cost item-RFID tags. Our DCSTaR protocol is built upon light-weight 16-bit primitives: a Random Number Generator, Exclusive-OR, and a Cyclic Redundancy Check; utilizing these primitives, it also provides a simple Diffusion-Confusion cipher to encrypt the challenge and response from the tag to the RFID reader. As a result, our protocol achieves RFID tag-reader-server mutual authentication, confidentiality and integrity of the communicated data, secure key distribution, and key protection. It also provides an efficient way for consumers to verify whether tagged items are genuine or fake and to protect consumers' privacy while carrying tagged items.
[ "rfid", "diffusion-confusion cipher", "tag-reader communication security", "light-weight cryptography", "customer privacy", "epcglobal class-1 gen-2" ]
[ "P", "P", "R", "M", "M", "U" ]
J6FRmjj
On solutions of functional-integral equations of Urysohn type on an unbounded interval
In this paper we establish the existence of solutions of functional-integral and quadratic Urysohn integral equations on the interval R+ = [0, infinity). The technique of proof applied in this paper is based on the concept of measure of noncompactness and a fixed point theorem. Some new results are given.
[ "measure of noncompactness", "fixed point theorem", "nonlinear integral equation" ]
[ "P", "P", "M" ]
31R&DND
a cultural probes study on video sharing and social communication on the internet
The focus of this article is the link between video sharing and interpersonal communication on the internet. Previous works on social television systems belong to two categories: 1) studies on how collocated groups of viewers socialize while watching TV, and 2) studies on novel Social TV applications (e.g. experimental set-ups) and devices (e.g. ambient displays) that provide technological support for TV sociability over a distance. The main shortcoming of those studies is that they have not considered the dominant contemporary method of Social TV. Early adopters of technology have been watching and sharing video online. We employed cultural probes in order to gain in-depth information about the social aspect of video sharing on the internet. Our sample consisted of six heavy users of internet video, watching an average of at least one hour of internet video a day. In particular, we explored how they are integrating video into their daily social communication practices. We found that internet video is shared and discussed with distant friends. Moreover, the results of the study indicate several opportunities and threats for the development of integrated mass and interpersonal communication applications and services.
[ "cultural probes", "internet video", "online communication", "user study" ]
[ "P", "P", "R", "R" ]
3RjrMK:
Phenotypic Modulation of Vascular Smooth Muscle Cells
The smooth muscle myosin heavy chain (MHC) gene and its isoforms are excellent molecular markers that reflect smooth muscle phenotypes. The SMemb/Nonmuscle Myosin Heavy Chain B (NMHC-B) is a distinct MHC gene expressed predominantly in phenotypically modulated SMCs (synthetic-type SMCs). To dissect the molecular mechanisms governing phenotypic modulation of SMCs, we analyzed the transcriptional regulatory mechanisms underlying expression of the SMemb gene. We previously reported two transcription factors, BTEB2/IKLF and Hex, which transactivate the SMemb gene promoter based on transient reporter transfection assays. BTEB2/IKLF is a zinc finger transcription factor, whereas Hex is a homeobox protein. BTEB2/IKLF expression in SMCs is downregulated with vascular development in vivo but upregulated in cultured SMCs and in neointima in response to vascular injury after balloon angioplasty. BTEB2/IKLF and Hex activate not only the SMemb gene but also other genes activated in synthetic SMCs, including plasminogen activator inhibitor-1 (PAI-1), iNOS, PDGF-A, Egr-1, and VEGF receptors. Mitogenic stimulation activates BTEB2/IKLF gene expression through MEK1 and Egr-1. Elevation of intracellular cAMP is also important in phenotypic modulation of SMCs, because the SMemb promoter is activated cooperatively by the cAMP-response element binding protein (CREB) and Hex.
[ "phenotypic modulation", "vascular smooth muscle cells" ]
[ "P", "P" ]
-F8-TSG
Intent specifications: An approach to building human-centered specifications
This paper examines and proposes an approach to writing software specifications, based on research in systems theory, cognitive psychology, and human-machine interaction. The goal is to provide specifications that support human problem solving and the tasks that humans must perform in software development and evolution. A type of specification, called intent specifications, is constructed upon this underlying foundation.
[ "human-centered specifications", "requirements", "requirements specification", "safety-critical software", "software evolution", "means-ends hierarchy", "cognitive engineering" ]
[ "P", "U", "M", "M", "R", "U", "M" ]
-5GKUQ2
Point equivalence of second-order ODEs: Maximal invariant classification order
We show that the local equivalence problem of second-order ordinary differential equations under point transformations is completely characterized by differential invariants of order at most 10 and that this upper bound is sharp. We also demonstrate that, modulo Cartan duality and point transformations, the Painlevé-I equation can be characterized as the simplest second-order ordinary differential equation belonging to the class of equations requiring 10th-order jets for their classification.
[ "53a55" ]
[ "U" ]
3J99pX:
An improved evaluation of ladder logic diagrams and Petri nets for the sequence controller design in manufacturing systems
Sequence controller designs play a key role in advanced manufacturing systems. Traditionally, the ladder logic diagram (LLD) has been widely applied to programmable logic controllers (PLC), while recently the Petri net (PN) has emerged as an alternative tool for the sequence control of complex systems. The evaluation of both approaches has become crucial and has thus attracted attention.
[ "ladder logic diagrams", "petri nets", "sequence controllers", "manufacturing systems", "plc" ]
[ "P", "P", "P", "P", "P" ]
4bstRvU
Lightweight detection of node presence in MANETs
While mobility in the sense of node movement has been an intensively studied aspect of mobile ad hoc networks (MANETs), another aspect of mobility has not yet been subjected to systematic research: nodes may not only move around but also enter and leave the network. In fact, many proposed protocols for MANETs exhibit worst-case behavior when an intended communication partner is currently not present. Therefore, knowing whether a given node is currently present in the network can often help to avoid unnecessary overhead. In this paper, we present a solution to the presence detection problem. It uses a Bloom filter-based beaconing mechanism to aggregate and distribute information about the presence of network nodes. We describe the algorithm and discuss design alternatives. We assess the algorithm's properties both analytically and through simulation, and thereby underline the effectiveness and applicability of our approach.
[ "manets", "mobile ad hoc networks", "presence detection", "soft state bloom filter" ]
[ "P", "P", "P", "M" ]
1n5Rk-1
An integrated toolchain for model-based functional safety analysis
We design a complete toolchain for integrating fault tolerance analysis into modeling. The goal of this work is to bridge the gap between the different specialized tools available. Having an integrated environment will reduce errors, ensure coherence and simplify analysis.
[ "safety analysis", "bayesian networks", "model-based design", "functional testing" ]
[ "P", "U", "M", "M" ]
-iVyzsj
SCALE INVARIANT FEATURE MATCHING USING ROTATION-INVARIANT DISTANCE FOR REMOTE SENSING IMAGE REGISTRATION
Scale invariant feature transform (SIFT) has been widely used in image matching. But when SIFT is introduced into the registration of remote sensing images, the keypoint pairs which are expected to be matched are often assigned two different values of main orientation, owing to the significant difference in image intensity between remote sensing image pairs, and therefore many incorrect keypoint matches appear. This paper presents a method using a rotation-invariant distance instead of the Euclidean distance to match the scale invariant feature vectors associated with the keypoints. In the proposed method, the feature vectors are reorganized into feature matrices, and the fast Fourier transform (FFT) is introduced to compute the rotation-invariant distance between the matrices. Many more correct matches are obtained by the proposed method, since the rotation-invariant distance is independent of the main orientation of the keypoints. Experimental results indicate that the proposed method improves match performance compared to other state-of-the-art methods in terms of correct match rate and aligning accuracy.
[ "feature matching", "rotation-invariance distance", "remote sensing image", "image registration", "sift", "main orientation" ]
[ "P", "P", "P", "P", "P", "P" ]
43v95cb
computer-related gender differences
Computer-related gender differences are examined using survey responses from 651 college students. Issues studied include gender differences regarding interest and enjoyment of both using a computer and computer programming. Interesting gender differences with implications for teaching are examined for the groups (family, teachers, friends, others) that have the most influence on students' interest in computers. Traditional areas such as confidence, career understanding and social bias are also discussed. Preliminary results for a small sample of technology majors indicate that computer majors have unique interests and attitudes compared to other science majors.
[ "gender issues" ]
[ "R" ]
25TaQfg
Analysis of EEG signals by combining eigenvector methods and multiclass support vector machines
A new approach based on the implementation of multiclass support vector machine (SVM) with the error correcting output codes (ECOC) is presented for classification of electroencephalogram (EEG) signals. In practical applications of pattern recognition, there are often diverse features extracted from the raw data that needs to be recognized. Decision making was performed in two stages: feature extraction by eigenvector methods and classification using the classifiers trained on the extracted features. The aim of the study is classification of the EEG signals by the combination of eigenvector methods and multiclass SVM. The purpose is to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. The present research demonstrated that the eigenvector methods yield features that represent the EEG signals well, and the multiclass SVM trained on these features achieved high classification accuracies.
[ "eigenvector methods", "multiclass support vector machine (svm)", "electroencephalogram (eeg) signals" ]
[ "P", "P", "P" ]
nJBM&JE
lambda-RBAC: PROGRAMMING WITH ROLE-BASED ACCESS CONTROL
We study mechanisms that permit program components to express role constraints on clients, focusing on programmatic security mechanisms, which permit access controls to be expressed, in situ, as part of the code realizing basic functionality. In this setting, two questions immediately arise. (1) The user of a component faces the issue of safety: is a particular role sufficient to use the component? (2) The component designer faces the dual issue of protection: is a particular role demanded in all execution paths of the component? We provide a formal calculus and static analysis to answer both questions.
[ "role-based access control", "static analysis", "lambda-calculus" ]
[ "P", "P", "U" ]
3CE3Z8i
Output-only Modal Analysis using Continuous-Scan Laser Doppler Vibrometry and application to a 20kW wind turbine
Continuous-Scan Laser Doppler Vibrometry (CSLDV) is a technique where the measurement point continuously sweeps over a structure while measuring, capturing both spatial and temporal information. The continuous-scan approach can greatly accelerate measurements, allowing one to capture spatially detailed mode shapes in the same amount of time that conventional methods require to measure the response at a single point. The method is especially beneficial when testing large structures, such as wind turbines, that have low natural frequencies and hence may require very long time records at each measurement point. Several CSLDV methods have been presented that use sinusoidal excitation or impulse excitation, but CSLDV has not previously been employed with an unmeasured, broadband random input. This work extends CSLDV to that class of input, developing an Output-only Modal Analysis method (OMA-CSLDV). A recently developed algorithm for linear time-periodic system identification, which makes use of harmonic power spectra and the harmonic transfer function concept developed by Wereley [17], is used in conjunction with CSLDV measurements. One key consideration, the choice of the scan frequency, is explored. The proposed method is validated on a randomly excited free-free beam, where one-dimensional mode shapes are captured by scanning the laser along the length of the beam. The first seven natural frequencies and mode shapes are extracted from the harmonic power spectrum of the vibrometer signal and show good agreement with the analytically-derived modes of the beam. The method is then applied to identify the mode shapes of a parked 20kW wind turbine using a ground based laser and with only a light breeze providing excitation.
[ "output-only modal analysis", "laser doppler vibrometry", "modal identification", "operational modal analysis", "periodically time varying" ]
[ "P", "P", "R", "M", "M" ]
27T7TxR
Preferences in Wikipedia abstracts: Empirical findings and implications for automatic entity summarization
We empirically study how Wikipedians summarize entity descriptions in practice. We compare entity descriptions in DBpedia with their Wikipedia abstracts. We analyze the length of a summary and the priorities of property values. We analyze the priorities of, diversity of, and correlation between properties. Implications for automatic entity summarization are drawn from the findings.
[ "wikipedia", "entity summarization", "dbpedia", "feature selection", "property ranking" ]
[ "P", "P", "P", "U", "M" ]
3nmFdMR
Multi-organ localization with cascaded global-to-local regression and shape prior
We propose a fast and robust method for multiple-organ localization. Our method provides organ-dedicated confidence maps for each organ. It extends the cascade of random forests with an additional shape prior. The values of the testing and learning parameters can be explained physically. We evaluate our method on 130 CT volumes and show its good accuracy.
[ "multi-organ localization", "regression", "random forest", "3d ct", "abdominal organs" ]
[ "P", "P", "P", "M", "M" ]
31Nksja
Ordered interval routing schemes
An Interval Routing Scheme (IRS) represents the routing tables in a network in a space-efficient way by labeling each vertex with a unique integer address, and the outgoing edges at each vertex with disjoint subintervals of these addresses. An IRS that has at most k intervals per edge label is called a k-IRS. In this paper, we propose a new type of interval routing scheme, called an Ordered Interval Routing Scheme (OIRS), that uses an ordering of the outgoing edges at each vertex and allows non-disjoint intervals in the labels of those edges. We show for a number of graph classes that using an OIRS instead of an IRS reduces the size of the routing tables in the case of optimal routing, i.e., routing along shortest paths. We show that optimal routing in any k-tree is possible using an OIRS with at most 2^(k-1) intervals per edge label, although the best known result for an IRS is 2^(k+1) intervals per edge label. Any torus has an optimal 1-OIRS, although it may not have an optimal 1-IRS. We present similar results for the Petersen graph, k-garland graphs and a few other graphs.
[ "interval routing", "routing table", "oirs" ]
[ "P", "P", "P" ]
3hivb3&
Performance analysis in non-Rayleigh and non-Rician communications channels
This paper investigates the probability of erasure for mobile communication channels containing a limited number of scatterers. Two kinds of channels, with and without line of sight, are examined. The resultant data is depicted by graphs to express the differences from existing theoretical models more clearly. The results indicate that the probability of erasure differs from that predicted by both the Rayleigh and Rician models for a small number of scatterers.
[ "mobile communications", "fading", "non-rayleigh and non-rician channel" ]
[ "P", "U", "R" ]
LDAbyDP
Computational geometry column 41
The recent result that n congruent balls in R^d have at most 4 distinct geometric permutations is described.
[ "geometric permutation", "line transversal", "stabbing" ]
[ "P", "U", "U" ]
2D41shL
Trends of environmental information systems in the context of the European Water Framework Directive
In Europe, the development of Environmental Information Systems for the water domain is heavily influenced by the need to support the processes of the European Water Framework Directive (WFD). The aim of the WFD is to ensure that all European waters, these being groundwater, surface or coastal waters, are protected according to a common standard. While the WFD itself includes concrete information technology (IT) recommendations only at a very high level of data exchange, regional and/or national environmental agencies build or adapt their information systems according to their specific requirements in order to deliver the results for the first WFD reporting phase on time. Moreover, as the WFD requires a water management policy centered on natural river basin districts instead of administrative and political regions, the agencies have to co-ordinate their work, possibly across national borders. Against this background, the present article analyses existing IT recommendations for the WFD implementation strategy and motivates the need to develop an IT Framework Architecture that comprises different views such as an organisational, a process, a data and a functional view. After having presented representative functions of operational water body information systems for the thematic and the co-operation layer, the article concludes with a summary of future IT developments that are required to efficiently support the WFD implementation.
[ "environmental information systems", "water framework directive", "wfd", "eis", "gml", "inspire", "gmes", "java", "ogc" ]
[ "P", "P", "P", "U", "U", "U", "U", "U", "U" ]
2JuMQNi
A finite volume method for viscous incompressible flows using a consistent flux reconstruction scheme
An incompressible Navier-Stokes solver using a curvilinear body-fitted collocated grid has been developed to solve unconfined flow past arbitrary two-dimensional body geometries. In this solver, the full Navier-Stokes equations have been solved numerically in the physical plane itself without using any transformation to the computational plane. For the proper coupling of pressure and velocity field on the collocated grid, a new scheme, designated the 'consistent flux reconstruction' (CFR) scheme, has been developed. In this scheme, the cell face centre velocities are obtained explicitly by solving the momentum equations at the centre of the cell faces. The velocities at the cell centres are also updated explicitly by solving the momentum equations at the cell centres. By resorting to such a fully explicit treatment, considerable simplification has been achieved compared to earlier approaches. In the present investigation the solver has been applied to unconfined flow past a square cylinder at zero and non-zero incidence at low and moderate Reynolds numbers, and reasonably good agreement has been obtained with results available from the literature.
[ "finite volume method", "consistent flux reconstruction", "incompressible navier-stokes solver", "physical plane", "curvilinear collocated grid", "explicit-explicit scheme" ]
[ "P", "P", "P", "P", "R", "M" ]
2NxmViw
practical online retrieval evaluation
Online evaluation is among the few evaluation techniques available to the information retrieval community that are guaranteed to reflect how users actually respond to improvements developed by the community. Broadly speaking, online evaluation refers to any evaluation of retrieval quality conducted while observing user behavior in a natural context. However, it is rarely employed outside of large commercial search engines, due primarily to a perception that it is impractical at small scales. The goal of this tutorial is to familiarize information retrieval researchers with state-of-the-art techniques in evaluating information retrieval systems based on natural user clicking behavior, as well as to show how such methods can be practically deployed. In particular, our focus will be on demonstrating how the Interleaving approach and other click-based techniques contrast with traditional offline evaluation, and how these online methods can be effectively used in academic-scale research. In addition to lecture notes, we will also provide sample software and code walk-throughs to showcase the ease with which Interleaving and other click-based methods can be employed by students, academics and other researchers.
[ "online evaluation", "interleaving", "preference judgments", "web search", "clickthrough data" ]
[ "P", "P", "U", "M", "U" ]
1s3BKAW
A privacy-preserving clustering approach toward secure and effective data analysis for business collaboration
The sharing of data has been proven beneficial in data mining applications. However, privacy regulations and other privacy concerns may prevent data owners from sharing information for data analysis. To resolve this challenging problem, data owners must design a solution that meets privacy requirements and guarantees valid data clustering results. To achieve this dual goal, we introduce a new method for privacy-preserving clustering called Dimensionality Reduction-Based Transformation (DRBT). This method relies on the intuition behind random projection to protect the underlying attribute values subjected to cluster analysis. The major features of this method are: (a) it is independent of distance-based clustering algorithms; (b) it has a sound mathematical foundation; and (c) it does not require CPU-intensive operations. We show analytically and empirically that, by transforming a data set using DRBT, a data owner can achieve privacy preservation and accurate clustering with only a small communication cost overhead.
[ "privacy-preserving clustering", "random projection", "privacy-preserving data mining", "dimensionality reduction", "privacy-preserving clustering over centralized data", "privacy-preserving clustering over vertically partitioned data" ]
[ "P", "P", "R", "M", "M", "M" ]
-SnFD5G
Unified read requests
Most work on multimedia storage systems has assumed that clients will be serviced using a round-robin strategy. The server services the clients in rounds and each client is allocated a time slice within that round. Furthermore, most such algorithms are evaluated on the basis of a tightly specified cost function. This is the basis for well known algorithms such as FCFS, SCAN, SCAN-EDF, etc. In this paper, we describe a Request Merging (RM) module that takes as input a set of client requests, a set of constraints on the desired performance such as client waiting time or maximum disk bandwidth, and a cost function. It produces as output a Unified Read Request (URR), telling the storage server which data items to read, and when the clients would like these data items to be delivered to them. Given a cost function cf, a URR is optimal if there is no other URR satisfying the constraints with a lower cost. We present three algorithms in this paper, each of which accomplishes this kind of request merging. The first algorithm, OptURR, is guaranteed to produce minimal cost URRs with respect to arbitrary cost functions. In general, the problem of computing an optimal URR is NP-complete, even when only two data objects are considered. To alleviate this problem, we develop two other algorithms, called GreedyURR and FastURR, that may produce sub-optimal URRs but have some nicer computational properties. We report on the pros and cons of these algorithms through an experimental evaluation.
[ "cost function", "request merging", "optimality", "multimedia storage server" ]
[ "P", "P", "P", "R" ]
2j55ZJ-
Brain-Computer Evolutionary Multiobjective Optimization: A Genetic Algorithm Adapting to the Decision Maker
The centrality of the decision maker (DM) is widely recognized in the multiple criteria decision-making community. This translates into emphasis on seamless human-computer interaction, and adaptation of the solution technique to the knowledge which is progressively acquired from the DM. This paper adopts the methodology of reactive search optimization (RSO) for evolutionary interactive multiobjective optimization. RSO follows the paradigm of "learning while optimizing," through the use of online machine learning techniques as an integral part of a self-tuning optimization scheme. User judgments of couples of solutions are used to build robust incremental models of the user utility function, with the objective of reducing the cognitive burden required from the DM to identify a satisficing solution. The technique of support vector ranking is used together with a k-fold cross-validation procedure to select the best kernel for the problem at hand, during the utility function training procedure. Experimental results are presented for a series of benchmark problems.
[ "reactive search optimization", "machine learning", "support vector ranking", "interactive decision making" ]
[ "P", "P", "P", "M" ]
j&tGLuN
Linear Separability of Gene Expression Data Sets
We study simple geometric properties of gene expression data sets, where samples are taken from two distinct classes (e.g., two types of cancer). Specifically, the problem of linear separability for pairs of genes is investigated. If a pair of genes exhibits linear separation with respect to the two classes, then the joint expression level of the two genes is strongly correlated with the phenomenon of the sample being taken from one class or the other. This may indicate an underlying molecular mechanism relating the two genes and the phenomenon (e.g., a specific cancer). We developed and implemented novel efficient algorithmic tools for finding all pairs of genes that induce a linear separation of the two sample classes. These tools are based on computational geometric properties and were applied to 10 publicly available cancer data sets. For each data set, we computed the number of actual separating pairs and compared it to an upper bound on the number expected by chance, and to the numbers obtained empirically by shuffling the labels of the data at random. Seven out of these 10 data sets are highly separable. Statistically, this phenomenon is highly significant and very unlikely to occur at random. It is therefore reasonable to expect that it manifests a functional association between separating genes and the underlying phenotypic classes.
[ "linear separation", "gene expression analysis", "dna microarrays", "diagnosis" ]
[ "P", "M", "U", "U" ]
4wUX28n
A language for representing and extracting 3D geometry semantics from paper-based sketches
The key contribution is a visual language to formally represent form geometry semantics on paper. Parsing the language allows for the automatic generation of 3D virtual models. A proof-of-concept prototype tool was implemented. The language can roughly model forms with a linear topological ordering. Evaluation results show that practising designers would use the language.
[ "humancomputer interaction", "computer-aided sketching", "3d modelling" ]
[ "U", "M", "R" ]
45ojxdB
Decentralized list scheduling
Classical list scheduling is a very popular and efficient technique for scheduling jobs on parallel and distributed platforms. It is inherently centralized. However, with the increasing number of processors, the cost of managing a single centralized list becomes too prohibitive. A suitable approach to reduce the contention is to distribute the list among the computational units: each processor only has a local view of the work to execute. Thus, the scheduler is no longer greedy and standard performance guarantees are lost.
[ "scheduling", "list algorithms", "work stealing" ]
[ "P", "M", "M" ]
-P9EQLn
Antenna impedance matching with neural networks
Impedance matching between transmission lines and antennas is an important and fundamental concept in electromagnetic theory. One definition of antenna impedance is the resistance and reactance seen at the antenna terminals or the ratio of electric to magnetic fields at the input. The primary intent of this paper is real-time compensation for changes in the driving point impedance of an antenna due to frequency deviations. In general, the driving point impedance of an antenna or antenna array is computed by numerical methods such as the method of moments or similar techniques. Some configurations do lend themselves to analytical solutions, which will be the primary focus of this work. This paper employs a neural control system to match antenna feed lines to two common antennas during frequency sweeps. In practice, impedance matching is performed off-line with Smith charts or relatively complex formulas but they rarely perform optimally over a large bandwidth. There have been very few attempts to compensate for matching errors while the transmission system is in operation and most techniques have been targeted to a relatively small range of frequencies. The approach proposed here employs three small neural networks to perform real-time impedance matching over a broad range of frequencies during transmitter operation. Double stub tuners are being explored in this paper but the approach can certainly be applied to other methodologies. The ultimate purpose of this work is the development of an inexpensive microcontroller-based system.
[ "impedance matching", "control system", "vswr" ]
[ "P", "P", "U" ]
1g4D1RB
Cellular Automata over Group Alphabets: Undergraduate Education and the PascGalois Project
The purpose of this note is to report efforts underway in the PascGalois Project (www.pascgalois.org) to provide connections between standard courses in the undergraduate mathematics curriculum (e.g. abstract algebra, number theory, discrete mathematics) and cellular automata. The value of these connections to the mathematical education of undergraduates will be described. Project course supplements, supporting software, and areas of student research will also be summarized.
[ "group alphabets", "pascgalois project", "abstract algebra", "pascgalois je", "fractal dimensions", "growth rate dimensions", "undergraduate research" ]
[ "P", "P", "P", "M", "U", "U", "R" ]
5137LNB
from structured documents to novel query facilities
Structured documents (e.g., SGML) can benefit greatly from database support and more specifically from object-oriented database (OODB) management systems. This paper describes a natural mapping from SGML documents into OODBs and a formal extension of two OODB query languages (one SQL-like, the other calculus-based) in order to deal with SGML document retrieval. Although motivated by structured documents, the extensions of query languages that we present are general and useful for a variety of other OODB applications. A key element is the introduction of paths as first-class citizens. The new features allow querying data (and to some extent schema) without exact knowledge of the schema in a simple and homogeneous fashion.
[ "structure", "documentation", "query", "data", "database", "support", "object-oriented database", "management", "systems", "paper", "map", "formalism", "extensibility", "query languages", "order", "document retrieval", "general", "applications", "class", "feature", "schema", "knowledge", "sql" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U" ]
-299p&n
Determination of Oxidized Low-Density Lipoproteins (ox-LDL) versus ox-LDL/β2GPI Complexes for the Assessment of Autoimmune-Mediated Atherosclerosis
The immunolocalization of oxidized low-density lipoproteins (ox-LDL), β2-glycoprotein I (β2GPI), CD4+/CD8+ immunoreactive lymphocytes, and immunoglobulins in atherosclerotic lesions strongly suggested an active participation of the immune system in atherogenesis. Oxidative stress leading to ox-LDL production is thought to play a central role in both the initiation and progression of atherosclerosis. ox-LDL is highly proinflammatory and chemotactic for macrophage/monocyte and immune cells. Enzyme-linked immunosorbent assays (ELISAs) to measure circulating ox-LDL have been developed and are currently being used to assess oxidative stress as a risk factor or marker of atherosclerotic disease. ox-LDL interacts with β2GPI, and circulating ox-LDL/β2GPI complexes have been demonstrated in patients with systemic lupus erythematosus (SLE) and antiphospholipid syndrome (APS). It has been postulated that β2GPI binds ox-LDL to neutralize its proinflammatory and proatherosclerotic effects. Because β2GPI is ubiquitous in plasma, its interaction with ox-LDL may mask oxidized epitopes recognized by capture antibodies, potentially interfering with immunoassay results. The measurement of ox-LDL/β2GPI complexes may circumvent this interference, representing a more physiological and accurate way of measuring ox-LDL.
[ "oxidized low-density lipoprotein (ox-ldl)", "atherosclerosis", "?2-glycoprotein i (?2gpi", "oxidative stress", "elisa", "autoimmunity" ]
[ "P", "P", "P", "P", "P", "U" ]
-r3Ai&Y
Text document clustering based on frequent word meaning sequences
Most existing text clustering algorithms use the vector space model, which treats documents as bags of words. Thus, word sequences in the documents are ignored, while the meaning of natural languages strongly depends on them. In this paper, we propose two new text clustering algorithms, named Clustering based on Frequent Word Sequences (CFWS) and Clustering based on Frequent Word Meaning Sequences (CFWMS). A word is the word form appearing in the document, and a word meaning is the concept expressed by synonymous word forms. A word (meaning) sequence is frequent if it occurs in more than a certain percentage of the documents in the text database. The frequent word (meaning) sequences can provide compact and valuable information about those text documents. For experiments, we used the Reuters-21578 text collection, CISI documents of the Classic data set [Classic data set, ftp://ftp.cs.cornell.edu/pub/smart/], and a corpus of the Text Retrieval Conference (TREC) [High Accuracy Retrieval from Documents (HARD) Track of Text Retrieval Conference, 2004]. Our experimental results show that CFWS and CFWMS have much better clustering accuracy than Bisecting k-means (BKM) [M. Steinbach, G. Karypis, V. Kumar, A Comparison of Document Clustering Techniques, KDD-2000 Workshop on Text Mining, 2000], a modified bisecting k-means using background knowledge (BBK) [A. Hotho, S. Staab, G. Stumme, Ontologies improve text document clustering, in: Proceedings of the 3rd IEEE International Conference on Data Mining, 2003, pp. 541-544] and Frequent Itemset-based Hierarchical Clustering (FIHC) [B.C.M. Fung, K. Wang, M. Ester, Hierarchical document clustering using frequent itemsets, in: Proceedings of SIAM International Conference on Data Mining, 2003] algorithms.
[ "text documents", "clustering", "frequent word meaning sequences", "frequent word sequences", "web search", "wordnet" ]
[ "P", "P", "P", "P", "U", "U" ]
17&hghF
an online approach based on locally weighted learning for short-term traffic flow prediction
Traffic flow prediction is a basic function of an Intelligent Transportation System. Due to the complexity of traffic phenomena, most existing methods build complex models such as neural networks for traffic flow prediction. Since a model may lose effectiveness as time passes, it is important to update the model online. However, the high computational cost of maintaining a complex model poses a great challenge for model updating. The high computation cost lies in two aspects: the computation of complex model coefficients and the huge amount of training data required. In this paper, we propose a nonparametric approach based on locally weighted learning to predict traffic flow. Our approach incrementally incorporates new data into the model and is computationally efficient, which makes it suitable for online model updating and predicting. In addition, we adopt wavelet analysis to extract the periodic characteristic of the traffic data, which is then used as the input of the prediction model instead of the raw traffic flow data. Preliminary experiments on real data demonstrate the effectiveness and efficiency of our approach.
[ "traffic", "prediction", "online locally weighted learning", "real time" ]
[ "P", "P", "R", "R" ]
-Kh3haY
usable computing on open distributed systems
An open distributed system provides a best-effort guarantee on the quality of service provided to applications. This has worked well for throughput-based applications of the kind typically executed in Condor- or BOINC-style environments. For other applications, the absence of timeliness or correctness guarantees limits the utility or appeal of this environment. Computational results that are too late or erroneous are not usable to the application. We present techniques designed to efficiently promote usable computing in open distributed systems.
[ "autonomic computing", "grid computing", "computing paradigm" ]
[ "M", "M", "M" ]
-YeoGh5
A note on the not 3-choosability of some families of planar graphs
A graph G is L-list colorable if, for a given list assignment L = {L(v) : v ∈ V}, there exists a proper coloring c of G such that c(v) ∈ L(v) for all v ∈ V. If G is L-list colorable for any list assignment with |L(v)| >= k for all v ∈ V, then G is said to be k-choosable. In [M. Voigt, A not 3-choosable planar graph without 3-cycles, Discrete Math. 146 (1995) 325-328] and [M. Voigt, A non-3-choosable planar graph without cycles of length 4 and 5, 2003, Manuscript], Voigt gave a planar graph without 3-cycles and a planar graph without 4-cycles and 5-cycles which are not 3-choosable. In this note, we give smaller and simpler graphs than those proposed by Voigt and suggest an extension of Erdős' relaxation of Steinberg's conjecture to 3-choosability.
[ "coloring", "combinatorial problems", "list-coloring", "choosability" ]
[ "P", "U", "U", "U" ]
2Fs7uXb
Impedance spectroscopy studies of moisture uptake in low-k dielectrics and its relation to reliability
Water incursion into low-k BEOL capacitors was monitored via impedance spectroscopy. It is a non-destructive, zero DC field, low AC field probe (<0.5 V). Samples are tested under device operating conditions and are re-testable. Thermal activation energies related to water bonding with the dielectric are measured. The increase in AC loss is correlated with poorer reliability, i.e. early failure.
[ "impedance spectroscopy", "low-k", "reliability", "ac losses", "dielectric relaxation", "time dependent dielectric breakdown" ]
[ "P", "P", "P", "P", "M", "M" ]
2KWao:p
A multi-level depiction method for painterly rendering based on visual perception cue
Increasing the level of detail (LOD) in brushstrokes within areas of interest improves the realism of painterly rendering. Using a modified quad-tree, we segment an image into areas with similar levels of saliency; each of these segments is then used to control the brush strokes during rendering. We can also simulate real oil painting steps based on saliency information. Our method runs in a reasonable time and produces results that are visually appealing and competitive with previous techniques.
[ "non-photorealistic rendering", "painting technique", "image saliency" ]
[ "M", "R", "R" ]
-rh71Kx
Preventive replacement for systems with condition monitoring and additional manual inspections
We research a problem involving both condition monitoring and manual inspections. We define two types of preventive replacements. We utilize the delay time concept to model the failure process. We formulate a decision problem with two decision variables determined simultaneously.
[ "condition monitoring", "inspection", "maintenance", "delay-time", "two-stage failure process" ]
[ "P", "P", "U", "U", "M" ]
-EvZMgx
Redundant and force-differentiated systems in engineering and nature
Sophisticated load-carrying structures, in nature as well as man-made, share some common properties. A clear differentiation of tension, compression and shear is in nature primarily manifested in the properties of materials adapted to the efforts, whereas they in engineering are distributed on different components. For stability and failure safety, redundancy on different levels is also commonly used. The paper aims at collecting and expanding previous methods for the computational treatment of redundant and force-differentiated systems. A common notation is sought, giving and developing criteria for describing the diverse problems from a common structural mechanical viewpoint. From this, new criteria for the existence of solutions, and a method for treatment of targeted dynamic solutions are developed. Added aspects to previously described examples aim at emphasizing similarities and differences between engineering and nature, in the forms of a tension truss structure and the human musculoskeletal system.
[ "redundancy", "structures", "mechanisms", "dynamics", "equilibrium", "statics", "target control" ]
[ "P", "P", "P", "P", "U", "U", "M" ]
1WAd&bh
Simultaneous optimization of the material properties and the topology of functionally graded structures
A level set based method is proposed for the simultaneous optimization of the material properties and the topology of functionally graded structures. The objective of the present study is to determine the optimal material properties (via the material volume fractions) and the structural topology to maximize the performance of the structure in a given application. In the proposed method, the volume fraction and the structural boundary are considered as the design variables, with the former being discretized as a scalar field and the latter being implicitly represented by the level set method. To perform simultaneous optimization, the two design variables are integrated into a common objective functional. Sensitivity analysis is conducted to obtain the descent directions. The optimization process is then expressed as the solution to a coupled Hamilton-Jacobi equation and diffusion partial differential equation. Numerical results are provided for the problem of mean compliance optimization in two dimensions.
[ "level set method", "topology optimization", "dynamic implicit boundary", "functionally graded materials", "heterogeneous objects" ]
[ "P", "R", "M", "R", "M" ]
2CWM::G
The impact of head movements on user involvement in mediated interaction
We examine engagement within the conversational behaviours of subjects interacting with a socially expressive system. We find that real-time communication requires more than verbal communication and head nodding. The effect of head nodding depends on precise on-screen movement, achieved by synchronizing the on-screen movement with the head movement.
[ "head movements", "engagement", "nonverbal behaviours", "face-to-face interaction", "telepresence robot" ]
[ "P", "P", "M", "M", "U" ]
2z8Zb6P
Parsing images into regions, curves, and curve groups
In this paper, we present an algorithm for parsing natural images into middle-level vision representations: regions, curves, and curve groups (parallel curves and trees). This algorithm is targeted for an integrated solution to image segmentation and curve grouping through Bayesian inference. The paper makes the following contributions. (1) It adopts a layered (or 2.1D-sketch) representation integrating both region and curve models which compete to explain an input image. The curve layer occludes the region layer, and curves observe a partial-order occlusion relation. (2) A Markov chain search scheme, the Metropolized Gibbs Sampler (MGS), is studied. It consists of several pairs of reversible jumps to traverse the complex solution space. An MGS proposes the next state within the jump scope of the current state according to a conditional probability, like a Gibbs sampler, and then accepts the proposal with a Metropolis-Hastings step. This paper discusses systematic design strategies for devising reversible jumps for a complex inference task. (3) The proposal probability ratios in jumps are factorized into ratios of discriminative probabilities. The latter are computed in a bottom-up process, and they drive the Markov chain dynamics in a data-driven Markov chain Monte Carlo framework. We demonstrate the performance of the algorithm in experiments with a number of natural images.
[ "curve grouping", "image segmentation", "metropolized gibbs sampler", "data-driven markov chain monte carlo", "perceptual organization", "graph partition" ]
[ "P", "P", "P", "P", "U", "U" ]
4v4tddn
Laparoscopic Management of Adnexal Masses
Suspected ovarian neoplasm is a common clinical problem affecting women of all ages. Although the majority of adnexal masses are benign, the primary goal of diagnostic evaluation is the exclusion of malignancy. It has been estimated that approximately 5-10% of women in the United States will undergo a surgical procedure for a suspected ovarian neoplasm during their lifetime. Despite the magnitude of the problem, there is still considerable disagreement regarding the optimal surgical management of these lesions. Traditional management has relied on laparotomy to avoid undertreatment of a potentially malignant process. Advances in detection, diagnosis, and minimally invasive surgical techniques make it necessary now to review this practice in an effort to avoid unnecessary morbidity among patients. Here, we review the literature on the laparoscopic approach to the treatment of the adnexal mass without sacrificing the principles of oncologic surgery. We highlight the potential of minimally invasive surgery and address the risks associated with the laparoscopic approach.
[ "adnexal masses", "ovarian neoplasm", "laparotomy" ]
[ "P", "P", "P" ]
2i2wAzi
Dealing with plagiarism in the information systems research community: A look at factors that drive plagiarism and ways to address them
Imagine yourself spending years conducting a research project and having it published as an article in a refereed journal, only to see a plagiarized copy of the article later published in another journal. Then imagine yourself being left to fight for your rights alone, and eventually finding out that it would be very difficult to hold the plagiarist accountable for what he or she did. The recent decision by the Association of Information Systems to create a standing committee on member misconduct suggests that while this type of situation may sound outrageous, it is likely to become uncomfortably frequent in the information systems research community if proper measures are not taken by a community-backed organization. In this article, we discuss factors that can drive plagiarism, as well as potential measures to prevent it. Our goal is to discuss alternative ways in which plagiarism can be prevented and dealt with when it arises. We hope to start a debate that provides the basis on which broader mechanisms to deal with plagiarism can be established, which we envision as being associated with and complementary to the committee created by the Association for Information Systems.
[ "plagiarism", "information systems research", "community", "committees", "ethics" ]
[ "P", "P", "P", "P", "U" ]
-AgQKao
Percolation in the secrecy graph
The secrecy graph is a random geometric graph which is intended to model the connectivity of wireless networks under secrecy constraints. Directed edges in the graph are present whenever a node can talk to another node securely in the presence of eavesdroppers, which, in the model, is determined solely by the locations of the nodes and eavesdroppers. In the case of infinite networks, a critical parameter is the maximum density of eavesdroppers that can be accommodated while still guaranteeing an infinite component in the network, i.e., the percolation threshold. We focus on the case where the locations of the nodes and eavesdroppers are given by Poisson point processes, and present bounds for different types of percolation, including in-, out- and undirected percolation.
[ "percolation", "secrecy graph", "branching process" ]
[ "P", "P", "M" ]
-xnjBHE
Evaluation of Region-of-Interest coders using perceptual image quality assessments
Perceptual image assessment is proposed for coder performance evaluation. The proposed assessment uses a linear combination of perceptual measures based solely on features. Region-of-Interest coder perceptual evaluation aims at identifying coder behavior. Some perceptual assessments are adequate for evaluating the test coders.
[ "region-of-interest", "quality assessment", "perceptual evaluation", "image coding", "wavelet", "distortion measure", "human visual system", "mean-observed scores", "rate-distortion function" ]
[ "P", "P", "P", "M", "U", "M", "U", "U", "U" ]
3DRDGhh
achieving anycast in dtns by enhancing existing unicast protocols
Many DTN environments, such as emergency response networks and pocket-switched networks, are based on human mobility and communication patterns, which naturally lead to groups. In these scenarios, group-based communication is central, and hence a natural and useful routing paradigm is anycast, where a node attempts to communicate with at least one member of a particular group. Unfortunately, most existing anycast solutions assume connectivity, and the few specifically for DTNs are single-copy in nature and have only been evaluated in highly limited mobility models. In this paper, we propose a protocol-independent method of enhancing a large number of existing DTN unicast protocols, giving them the ability to perform anycast communication. This method requires no change to the unicast protocols themselves and instead changes their world view by adding a thin layer beneath the routing layer. Through a thorough set of simulations, we also evaluate how different parameters and network conditions affect the performance of these newly transformed anycast protocols.
[ "anycast", "dtn", "routing" ]
[ "P", "P", "P" ]
-u6eauL
a framework for supporting data integration using the materialized and virtual approaches
This paper presents a framework for data integration currently under development in the Squirrel project. The framework is based on a special class of mediators, called Squirrel integration mediators. These mediators can support the traditional virtual and materialized approaches, and also hybrids of them. In the Squirrel mediators, a relation in the integrated view can be supported as (a) fully materialized, (b) fully virtual, or (c) partially materialized (i.e., with some attributes materialized and other attributes virtual). In general, (partially) materialized relations of the integrated view are maintained by incremental updates from the source databases. Squirrel mediators provide two approaches for doing this: (1) materialize all needed auxiliary data, so that data sources do not have to be queried when processing the incremental updates; or (2) leave some or all of the auxiliary data virtual, and query selected source databases when processing incremental updates. The paper presents formal notions of consistency and "freshness" for integrated views defined over multiple autonomous source databases. It is shown that Squirrel mediators satisfy these properties.
[ "support", "data", "data integrity", "integrability", "virtualization", "paper", "developer", "project", "class", "mediator", "hybrid", "relation", "views", "attributes", "general", "incremental", "update", "database", "query", "process", "formalism", "consistency", "autonomic", " framework " ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
XEV6LPr
Validation and verification of intelligent systems - what are they and how are they different?
Researchers and practitioners in the field of expert systems all generally agree that to be useful, any fielded intelligent system must be adequately verified and validated. But what does this mean in concrete terms? What exactly is verification? What exactly is validation? How are they different? Many authors have attempted to define these terms and, as a result, several interpretations have surfaced. It is our opinion that there is great confusion as to what these terms mean, how they are different, and how they are implemented. This paper, therefore, has two aims: to clarify the meaning of the terms validation and verification as they apply to intelligent systems, and to describe how several researchers are implementing these. The second part of the paper, therefore, details some techniques that can be used to perform the verification and validation of systems. Also discussed is the role of testing as part of the above-mentioned processes.
[ "validation", "verification", "intelligent systems", "expert systems", "evaluation" ]
[ "P", "P", "P", "P", "U" ]
3SPakzR
Multiple blocking sets and multisets in Desarguesian planes
In AG(2, q^2), the minimum size of a minimal (q-1)-fold blocking set is known to be q^3 - 1. Here, we construct minimal (q-1)-fold blocking sets of size q^3 in AG(2, q^2). As a byproduct, we also obtain new two-character multisets in PG(2, q^2). The essential idea in this paper is to investigate q^3-sets satisfying the opposite of Ebert's discriminant condition.
[ "multiple blocking set", "multiset" ]
[ "P", "P" ]
2GPfXYX
A simple weighting scheme for classification in two-group discriminant problems
This paper introduces a new weighted linear programming model, which is simple and has strong intuitive appeal for two-group classifications. Generally, in applying weights to solve a classification problem in discriminant analysis where the relative importance of every observation is known, larger weights (penalties) will be assigned to those more important observations. The perceived importance of an observation is measured here as the willingness of the decision-maker to misclassify this observation. For instance, a decision-maker is least willing to see a classification rule that misclassifies a top financially strong firm to the group that contains bankrupt firms. Our weighted-linear programming model provides an objective-weighting scheme whereby observations can be weighted according to their perceived importance. The more important this observation, the heavier its assigned weight. Results of a simulation experiment that uses contaminated data show that the weighted linear programming model consistently and significantly outperforms existing linear programming and standard statistical approaches in attaining higher average hit-ratios in the 100 replications for each of the 27 cases tested. Scope and purpose Generally, in applying weights to solve a discriminant problem where the relative importance of every observation is known, larger weights (penalties) will be assigned to those more important observations. However, if decision-makers do not have prior or additional information about the observations, it is very difficult to assign weights to the observations. Subjective judgements from decision-makers may be a way of obtaining those weights. An alternative way is to suggest an objective weighting scheme for obtaining classification weights of observations from the data matrix of the training sample. We suggest a new approach, which provides an objective weighting scheme whereby individual observations can be weighted according to their perceived importance. The more important the observation, the heavier its assigned weight will be. The importance of individual observation is first determined in one of two stages of our model using more than one discriminant function. Simulation experiments are run to test this new approach.
[ "classification", "linear programming", "discriminant analysis", "statistics" ]
[ "P", "P", "P", "P" ]
zFcGB53
HYBRID INTELLIGENT PACKING SYSTEM (HIPS) THROUGH INTEGRATION OF ARTIFICIAL NEURAL NETWORKS, ARTIFICIAL INTELLIGENCE, AND MATHEMATICAL PROGRAMMING
A successful solution to the packing problem is a major step toward material savings, since scrap in the cutting process can be avoided, and therefore toward money savings. Although the problem is of great interest, no satisfactory algorithm has been found that can be applied to all possible situations. This paper models a Hybrid Intelligent Packing System (HIPS) by integrating Artificial Neural Networks (ANNs), Artificial Intelligence (AI), and Operations Research (OR) approaches for solving the packing problem. The HIPS consists of two main modules, an intelligent generator module and a tester module. The intelligent generator module has two components: (i) a rough assignment module and (ii) a packing module. The rough assignment module utilizes the expert system and rules concerning cutting restrictions and allocation goals in order to generate many possible patterns. The packing module is an ANN that packs the generated patterns and performs post-solution adjustments. The tester module, which consists of a mathematical programming model, selects the sets of patterns that will result in a minimum amount of scrap.
[ "cutting and packing", "parallel processing", "data driven", "connectionist", "extensional programming" ]
[ "R", "M", "U", "U", "M" ]
-R1RNbg
Distributed Scheduling and Resource Allocation for Cognitive OFDMA Radios
Scheduling spectrum access and allocating power and rate resources are tasks affecting critically the performance of wireless cognitive radio (CR) networks. The present contribution develops a primal-dual optimization framework to schedule any-to-any CR communications based on orthogonal frequency division multiple access and allocate power so as to maximize the weighted average sum-rate of all users. Fairness is ensured among CR communicators and possible hierarchies are respected by guaranteeing minimum rate requirements for primary users while allowing secondary users to access the spectrum opportunistically. The framework leads to an iterative channel-adaptive distributed algorithm whereby nodes rely only on local information exchanges with their neighbors to attain global optimality. Simulations confirm that the distributed online algorithm does not require knowledge of the underlying fading channel distribution and converges to the optimum almost surely from any initialization.
[ "resource allocation", "cognitive radios", "quality of service", "distributed online implementation" ]
[ "P", "P", "M", "M" ]
-Ka129C
physically based hydraulic erosion simulation on graphics processing unit
Visual simulation of natural erosion on terrains has always been a fascinating research topic in the field of computer graphics. While there are many algorithms already developed to improve the visual quality of terrain, the recent simulation methods revolve around physically-based hydraulic erosion because it can generate realistic natural-looking terrains. However, many of such algorithms were tested only on low resolution terrains. When simulated on a higher resolution terrain, most of the current algorithms become computationally expensive. This is why in many applications today, terrains are generated off-line and loaded during the application runtime. This method restricts the number of terrains which can be stored if there is a limitation on storage capacity. Recently, graphics hardware has evolved into an indispensable tool in improving the speed of computation. This has motivated us to develop an erosion algorithm that maps to graphics hardware for faster terrain generation. In this paper, we propose a fast and efficient hydraulic erosion procedural technique that utilizes the GPU's powerful computation capability in order to generate high resolution erosion on terrains. Our method is based on the Newtonian physics approach and is implemented on a two-dimensional data structure which stores height fields, water amounts, and dissolved sediment and water velocities. We also present a comprehensive comparison between the CPU and GPU implementations together with the visual results and statistics on the simulation time taken.
[ "hydraulic erosion", "visual simulation", "terrain", "physically based modeling", "natural phenomena" ]
[ "P", "P", "P", "M", "M" ]
4Pjp6Q5
novel immune-based framework for securing ad hoc networks
One of the main security issues in mobile ad hoc networks (MANETs) is a malicious node that can falsify a route advertisement, overwhelm traffic without forwarding it, help to forward corrupted data, and inject false or incomplete information, among many other security problems. Mapping immune system mechanisms to networking security is the main objective of this paper, which may significantly contribute to securing MANETs. As a step toward providing secure and reliable broadband services, formal specification logic along with a novel immune-inspired security framework (I2MANETs) is introduced. The different immune components are synchronized with the framework through an agent that has the ability to replicate, monitor, detect, classify, and block/isolate corrupted packets and/or nodes in a federated domain. The framework functions like the Human Immune System in first response, second response, adaptability, distributability, survivability, and other immune features and properties. Interoperability with different routing protocols is considered. The framework has been implemented in a real environment. Desired and achieved results are presented.
[ "security", "manets", "specification logic", "mobile agent" ]
[ "P", "P", "P", "R" ]
37fr&dF
Slabpose columnsort: A new oblivious algorithm for out-of-core sorting on distributed-memory clusters
Our goal is to develop a robust out-of-core sorting program for a distributed-memory cluster. The literature contains two dominant paradigms for out-of-core sorting algorithms: merging-based and partitioning-based. We explore a third paradigm, that of oblivious algorithms. Unlike the two dominant paradigms, oblivious algorithms do not depend on the input keys and therefore lead to predetermined I/O and communication patterns in an out-of-core setting. Predetermined I/O and communication patterns facilitate overlapping I/O, communication, and computation for efficient implementation. We have developed several out-of-core sorting programs using the paradigm of oblivious algorithms. Our baseline implementation, 3-pass columnsort, was based on Leighton's columnsort algorithm. Though efficient in terms of I/O and communication, 3-pass columnsort has a restriction on the maximum problem size. As our first effort toward relaxing this restriction, we developed two implementations: subblock columnsort and M-columnsort. Both of these implementations incur substantial performance costs: subblock columnsort performs additional disk I/O, and M-columnsort needs substantial amounts of extra communication and computation. In this paper we present slabpose columnsort, a new oblivious algorithm that we have designed explicitly for the out-of-core setting. Slabpose columnsort relaxes the problem-size restriction at no extra I/O or communication cost. Experimental evidence on a Beowulf cluster shows that unlike subblock columnsort and M-columnsort, slabpose columnsort runs almost as fast as 3-pass columnsort. To the best of our knowledge, our implementations are the first out-of-core multiprocessor sorting algorithms that make no assumptions about the keys and produce output that is perfectly load balanced and in the striped order assumed by the Parallel Disk Model.
[ "columnsort", "oblivious algorithms", "out-of-core", "distributed-memory cluster", "parallel sorting" ]
[ "P", "P", "P", "P", "R" ]
XhaaC6z
A real-time kinematics on the translational crawl motion of a quadruped robot
It is known that the kinematics of a quadruped robot is complex due to its topology and the redundant actuation in the robot. However, it is fundamental to compute the inverse and direct kinematics for the sophisticated control of the robot in real time. In this paper, the translational crawl gait of a quadruped robot is introduced and an approach to find the solution of the kinematics for such a crawl motion is proposed. Since the resulting kinematics is simplified, the formulation can be used for the real-time control of the robot. The results of simulation and experiment show that the present method is feasible and efficient.
[ "real-time kinematics", "quadruped robot", "translational crawl gait", "crawl velocity", "joint position", "joint velocity", "trajectory of center-of-gravity" ]
[ "P", "P", "P", "M", "U", "U", "M" ]
1RhrLGM
Geographical classification of olive oils by the application of CART and SVM to their FT-IR
This paper reports the application of Fourier-transform infrared (FT-IR) spectroscopy to the geographical classification of extra virgin olive oils. Two chemometric techniques, classification and regression trees (CART) and support vector machines (SVM) based on the Gaussian kernel and the recently introduced Euclidean distance-based Pearson VII Universal Kernel (PUK), were applied to discriminate between Italian and non-Italian and between Ligurian and non-Ligurian olive oils. The PUK has been applied in the literature with success on regression problems. In this paper the mapping power of this universal kernel for classification was investigated. In this study it was observed that SVM performed better than CART. SVM based on the PUK provide models with a higher selectivity and sensitivity (and thus a better accuracy) than those obtained using the Gaussian kernel. The wave numbers selected in the classification trees were interpreted, demonstrating that the trees were chemically justified. This study also shows that FT-IR spectroscopy associated with SVM and CART can be used to correctly discriminate between various origins of olive oils, demonstrating that the combination of techniques might be a powerful tool for supporting the claimed origin of olive oils.
[ "olive oil", "ft-ir", "classification and regression trees", "support vector machines" ]
[ "P", "P", "P", "P" ]
4jQCJ4p
ARFNNs under Different Types SVR for Identification of Nonlinear Magneto-Rheological Damper Systems with Outliers
This paper demonstrates different types of support vector regression (SVR) for annealing robust fuzzy neural networks (ARFNNs) applied to the identification of nonlinear magneto-rheological (MR) dampers with outliers. An SVR performs well in determining the number of rules in the simplified fuzzy inference system and the initial weights for the fuzzy neural networks. In this paper, we independently propose two different types of SVR for the ARFNNs. Hence, a combination model that fuses a simplified fuzzy inference system, SVR and radial basis function networks is used. Based on these initial structures, the annealing robust learning algorithm (ARLA) can then be used effectively to adjust the parameters of the structures. Simulation results show the superiority of the proposed method with the different types of SVR for nonlinear MR damper systems with outliers.
[ "magneto-rheological damper", "support vector regression", "fuzzy neural networks", "annealing robust learning algorithm" ]
[ "P", "P", "P", "P" ]
-2ZoWbA
Fuzzy linear regression model based on fuzzy scalar product
A new concept and method of imposing imprecise (fuzzy) input and output data upon the conventional linear regression model is proposed in this paper. We introduce the fuzzy scalar (inner) product to formulate the fuzzy linear regression model. In order to invoke the conventional approach of linear regression analysis for real-valued data, we consider the alpha-level linear regression models of the fuzzy linear regression model. We construct the membership functions of fuzzy least squares estimators via the "Resolution Identity", a well-known formula in fuzzy set theory. In order to obtain the membership value of any given least squares estimate taken from the fuzzy least squares estimator, we transform the original problem into optimization problems. We also provide two computational procedures to solve the optimization problems.
[ "fuzzy linear regression model", "fuzzy scalar (inner) product", "least squares estimator", "optimization", "fuzzy number" ]
[ "P", "P", "P", "P", "M" ]
4K89ngB
Relating torque and slip in an odometric model for an autonomous agricultural vehicle
This paper describes a method of considering the slip that is experienced by the wheels of an agricultural autonomous guided vehicle such that the accuracy of dead-reckoning navigation may be improved. Traction models for off-road locomotion are reviewed. Using experimental data from an agricultural AGV, a simplified form suitable for vehicle navigation is derived. This simplified model relates measurements of the torques applied to the wheels with wheel slip, and is used as the basis of an observation model for odometric sensor data in the vehicle's extended Kalman filter (EKF) navigation system. The slip model parameters are included as states in the vehicle EKF so that the vehicle may adapt to changing surface properties. Results using real field data and a simulation of the vehicle EKF show that positional accuracy can be increased by a slip-aware odometric model, and that when used as part of a multi-sensor navigation system, the consistency of the EKF state estimator is improved.
[ "slip", "navigation", "traction", "kalman filter", "odometry" ]
[ "P", "P", "P", "P", "U" ]