abstract | authors | title | __index_level_0__ |
---|---|---|---|
We give an approach for using flow information from a system of wells to characterize hydrologic properties of an aquifer. In particular, we consider experiments where an impulse of tracer fluid is injected along with the water at the input wells and its concentration is recorded over time at the uptake wells. We focus on characterizing the spatially varying permeability field, which is a key attribute of the aquifer for determining flow paths and rates for a given flow experiment. As is standard for estimation from such flow data, we use complicated subsurface flow code that simulates the fluid flow through the aquifer for a particular well configuration and aquifer specification, in particular the permeability field over a grid. The solution to this ill-posed problem requires that some regularity conditions be imposed on the permeability field. Typically, this regularity is accomplished by specifying a stationary Gaussian process model for the permeability field. Here we use an intrinsically stationary ... | ['Herbert K. H. Lee', 'David Higdon', 'Zhuoxin Bi', 'Marco A. R. Ferreira', 'Mike West'] | Markov Random Field Models for High-Dimensional Parameters in Simulations of Fluid Flow in Porous Media | 85,340 |
Parallel Procedure Based on the Swarm Intelligence for Solving the Two-Dimensional Inverse Problem of Binary Alloy Solidification | ['Edyta Hetmaniok', 'Damian Słota', 'Adam Zielonka'] | Parallel Procedure Based on the Swarm Intelligence for Solving the Two-Dimensional Inverse Problem of Binary Alloy Solidification | 824,272 |
Proactive records management is often described as a prerequisite for a well-functioning public administration that is efficient, legally secure and democratic. In the context of e-government, official information is seen as a valuable asset, which is why technical solutions are developed to improve accessibility and reusability. Yet how to 'capture' and preserve the information is still unclear, and adaptations of routines which have originated in a paper-based administration to practices suitable for managing digital records are often lacking. This risks impeding the work of public agencies, their services toward citizens, and the goals of e-government. This paper uses current plans for developing a national e-archive service in Sweden as a case, applying literary warrant and the records continuum model to discuss how archives management can support the goals of e-government and facilitate proactivity. A special focus is placed on 'capture' as a vital part of holistic recordkeeping. The result shows that despite regulations and ambitions supporting proactivity, 'capture' is not emphasized as a necessity for using, sharing and preserving official information. This could create archives that are incomplete, and risk contributing to a decline in governmental transparency and openness. | ['Ann-Sofie Klareld'] | Proactivity Postponed? 'Capturing' Records Created in the Context of E-government --- A Literary Warrant Analysis of the Plans for a National e-archive Service in Sweden | 666,524 |
We develop an auction-based algorithm for joint allocation of resources, namely power profiles at the source and relay nodes and the subcarrier assignment profile, for a multi-user amplify-and-forward (AF) orthogonal frequency division multiple access (OFDMA) system. The proposed algorithm is based on a sequential single-item auction, where each user submits a bid based on either the marginal increase or the relative marginal increase of the data rate after using that subcarrier with optimal power profiles at the source and relay nodes. The first bidding strategy maximizes the sum data rate, whereas the second bidding strategy maximizes the fairness index. In both cases, the subcarrier is assigned to the user who submits the highest bid. The algorithm proceeds in a sequential fashion until all subcarriers are assigned. The system throughput and fairness indices are used to evaluate the performance of the proposed algorithm. Numerical results are used to show the merits of the proposed algorithm. | ['Hanan Al-Tous', 'Imad Barhumi'] | Auction framework for resource allocation in AF-OFDMA systems | 929,754 |
This paper describes the FRDC machine translation system for the NTCIR-9 PatentMT. The FRDC system JIANZHEN is a hierarchical phrase-based (HPB) translation system. We participated in all three subtasks, i.e., Chinese to English, Japanese to English and English to Japanese. In this paper, we introduce a novel paraphrasing mechanism to handle a certain kind of Chinese sentence whose syntactic components are far separated. The paraphrasing approach, based on manual templates, moves far-separated syntactic components closer so that the translation could become more acceptable. In addition, we single out parentheses for special treatment in all three languages. | ['Zhongguang Zheng', 'Naisheng Ge', 'Yao Meng', 'Hao Yu'] | HPB SMT of FRDC Assisted by Paraphrasing for the NTCIR-9 PatentMT. | 687,349 |
Consider an aggregate arrival process A^N obtained by multiplexing N On-Off sources with exponential Off periods of rate \lambda and subexponential On periods \tau^{on}. For this process its activity period I^N satisfies \[ \Pr[I^N>t] \sim (1+\lambda \mathbb{E}[\tau^{on}])^{N-1} \Pr[\tau^{on}>t] \quad \text{as } t \rightarrow \infty, \] for all sufficiently small \lambda. When N goes to infinity, with \lambda N \rightarrow \Lambda, A^N approaches an M/G/\infty type process, for which the activity period I^\infty, or equivalently a busy period of an M/G/\infty queue with subexponential service requirement \tau^{on}, satisfies \Pr[I^\infty>t] \sim e^{\Lambda \mathbb{E}[\tau^{on}]} \Pr[\tau^{on}>t] as t \rightarrow \infty. For a simple subexponential On-Off fluid flow queue we establish a precise asymptotic relation between the Palm queue distribution and the time average queue distribution. Further, a queueing system in which one On-Off source, whose On period belongs to a subclass of subexponential distributions, is multiplexed with independent exponential sources with aggregate expected rate \mathbb{E}[e_t], is shown to be asymptotically equivalent to the same queueing system with the exponential arrival processes being replaced by their total mean value \mathbb{E}[e_t]. For a fluid queue with the limiting M/G/\infty arrivals we obtain a tight asymptotic lower bound for large buffer probabilities. Based on this bound, we suggest a computationally efficient approximation for the case of finitely many subexponential On-Off sources. Accuracy of this approximation is verified with extensive simulation experiments. | ['Predrag R. Jelenkovic', 'Aurel A. Lazar'] | Multiplexing On-Off Sources with Subexponential On Periods: Part I | 258,355 |
This paper investigates the sensitivity of continuous-time (CT) delta-sigma analog-to-digital converters (ADCs), candidate architectures for multi-standard and software-defined radio receivers, to feedback pulse-width jitter (PWJ) in presence of blocker signals received at the ADC input. A comparison between delta-sigma modulators with feedforward (FF) and feedback (FB) loop filter structures in terms of robustness to digital-to-analog converter (DAC) PWJ, in presence of blockers, is performed. Analysis and discussions developed in the paper are verified by CT simulations in Matlab/Simulink® and simulations results show good agreement with the theoretical expectations. It is shown that the PWJ induced errors due to out-of-band (OOB) blockers in the feedback path dominate the total in-band noise power (IBN) in FF delta-sigma structures and can cause a reduction in the achievable dynamic range that can be as large as 12 dB (2 bits of resolution) in case of using a non-return-to-zero DAC waveform. On the other hand, for same blocker levels at the modulator input, PWJ induced errors caused by OOB blockers have negligible contribution to the IBN in FB structures, owing to their stronger low-pass filtering characteristic and hence higher attenuation of OOB blockers. | ['Ramy Saad', 'Sebastian Hoyos'] | Sensitivity analysis of pulse-width jitter induced noise in continuous-time delta-sigma modulators to out-of-band blockers in wireless receivers | 100,825 |
In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked de-noising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU). | ['N. Wang', 'Dit-Yan Yeung'] | Learning a Deep Compact Image Representation for Visual Tracking | 1,267 |
A technique is presented for incrementally updating solutions to both union and intersection data-flow problems in response to program edits and transformations. For generality, the technique is based on the iterative approach to computing data-flow information. The authors show that for both union and intersection problems, some changes can be incrementally incorporated immediately into the data-flow sets while others are handled by a two-phase approach. The first phase updates the data-flow sets to overestimate the effect of the program change, enabling the second phase to incrementally update the affected data-flow sets to reflect the actual program change. An important problem that is addressed is the computation of the data-flow changes that need to be propagated throughout a program, based on different local code changes. The technique is compared to other approaches to incremental data-flow analysis. | ['Lori L. Pollock', 'Mary Lou Soffa'] | An incremental version of iterative data flow analysis | 23,805 |
Background: Natural human languages show a power law behaviour in which word frequency (in any large enough corpus) is inversely proportional to word rank - Zipf’s law. We have therefore asked whether similar power law behaviours could be seen in data from electronic patient records. | ['Leila R Kalankesh', 'John P. New', 'Patricia G. Baker', 'Andy Brass'] | The languages of health in general practice electronic patient records: a Zipf’s law analysis | 465,778 |
Visual conductor | ['Jakub Segen'] | Visual conductor | 672,232 |
This paper addresses the problem of mapping an application, which will be highly dynamic in the future, onto a heterogeneous multiprocessor platform in an energy-efficient way. A two-phase scheduling method is used for that purpose. By exploring the Pareto curves and scenarios generated at design time, the run-time scheduler can easily find a good schedule at a very low overhead, satisfying the system constraints and minimizing the energy consumption. A real-life example from a 3D quality of service kernel is used to show the effectiveness of our method. | ['Peng Yang', 'Paul Marchal', 'Chun Wong', 'Stefaan Himpe', 'Francky Catthoor', 'Patrick David', 'Johan Vounckx', 'Rudy Lauwereins'] | Managing dynamic concurrent tasks in embedded real-time multimedia systems | 102,964 |
There is nearly no conference on graphics, multimedia, and user interfaces that does not include a section on constraint-based graphics; on the other hand, most conferences on constraint processing favour applications in graphics. The present work compiles numerous papers on constraint-based approaches to computer-aided design, graphics, layout configuration, and user interfaces in general. In order to keep this study of bibliographical points up-to-date, the authors appreciate any comments and update information. | ['Walter Hower', 'Winfried Graf'] | A bibliographical survey of constraint-based approaches to CAD, graphics, layout, visualization, and related topics | 314,384 |
We describe a novel modular learning strategy for the detection of a target signal of interest in a nonstationary environment, which is motivated by the information preservation rule. The strategy makes no assumptions on the environment. It incorporates three functional blocks: (1) time-frequency analysis, (2) feature extraction, (3) pattern classification, the delineations of which are guided by the information preservation rule. The time-frequency analysis, implemented using the Wigner-Ville distribution, transforms the incoming received signal into a time-frequency image that accounts for the time-varying nature of the received signal's spectral content. This image provides a common input to a pair of channels, one of which is adaptively matched to the interference acting alone, and the other is adaptively matched to the target signal plus interference. Each channel of the receiver consists of a principal components analyser (for feature extraction) followed by a multilayer perceptron (for feature classification), which are implemented using self-organized and supervised forms of learning in feedforward neural networks, respectively. Experimental results, based on real-life radar data, are presented to demonstrate the superior performance of the new detection strategy over a conventional detector using constant false-alarm rate (CFAR) processing. The data used in the experiment pertain to an ocean environment, representing radar returns from small ice targets buried in sea clutter; they were collected with an instrument-quality coherent radar and properly ground-truthed. | ['Simon Haykin', 'Tarun Kumar Bhattacharya'] | Modular learning strategy for signal detection in a nonstationary environment | 111,313 |
A Linear Interpolation Algorithm for Spectral Filter Array Demosaicking | ['Congcong Wang', 'Xingbo Wang', 'Jon Yngve Hardeberg'] | A Linear Interpolation Algorithm for Spectral Filter Array Demosaicking | 567,884 |
A Vision-Based Three-Tiered Path Planning and Collision avoidance Scheme for miniature Air Vehicles. | ['Huili Yu', 'Randal W. Beard'] | A Vision-Based Three-Tiered Path Planning and Collision avoidance Scheme for miniature Air Vehicles. | 787,709 |
Detecting disordered breathing and limb movement using in-bed force sensors. | ['Daniel Waltisberg', 'Oliver Amft', 'Daniel Brunner', 'Gerhard Troester'] | Detecting disordered breathing and limb movement using in-bed force sensors. | 699,801 |
In support of the condition-based maintenance (CBM) philosophy, a theoretical framework and algorithmic methodology for obtaining useful diagnostic and prognostic data from small-scale electromechanical systems is developed. The methods are based on vibration and modal analyses of the physical components. To illustrate the concept of the derived process, an example circuit card is considered. Models are created using finite-element analysis (FEA) techniques and analyzed to determine fundamental mode shapes and vibration frequencies. Simulations are initially conducted on an unadulterated benchmark model. Various fault conditions and cracks that are typical of the modeled circuit card are inserted. Additionally, cracks of increasing length are introduced to simulate dynamic crack growth, representing a deteriorating system. The simulation results yielded vibration modes characteristic of undamaged, faulty, and deteriorating systems. The altered natural frequency response signatures derived from each test signal vector for systems with existing faults and under progressive cracking are compared with the normal operation benchmark signature. An O(N·log N) correlation technique is utilized for discrimination. Application of the developed techniques proved useful in characterizing system health by identifying both faulty and deteriorating conditions of the example component. | ['Andrew Scott'] | Characterizing System Health Using Modal Analysis | 493,539 |
The operational matrices of left Caputo fractional derivative, right Caputo fractional derivative, and Riemann–Liouville fractional integral, for shifted Chebyshev polynomials, are presented and derived. We propose an accurate and efficient spectral algorithm for the numerical solution of the two-sided space–time Caputo fractional-order telegraph equation with three types of non-homogeneous boundary conditions, namely, Dirichlet, Robin, and non-local conditions. The proposed algorithm is based on shifted Chebyshev tau technique combined with the derived shifted Chebyshev operational matrices. We focus primarily on implementing the novel algorithm both in temporal and spatial discretizations. This algorithm reduces the problem to a system of algebraic equations greatly simplifying the problem. This system can be solved by any standard iteration method. For confirming the efficiency and accuracy of the proposed scheme, we introduce some numerical examples with their approximate solutions and compare our results with those achieved using other methods. | ['A. H. Bhrawy', 'M.A. Zaky', 'José A. Tenreiro Machado'] | Numerical Solution of the Two-Sided Space–Time Fractional Telegraph Equation Via Chebyshev Tau Approximation | 637,588 |
A new chief executive officer and executive director of ACM | ['Alexander L. Wolf'] | A new chief executive officer and executive director of ACM | 629,206 |
After being widely studied in theory, physical layer security schemes are getting closer to enter the consumer market. Still, a thorough practical analysis of their resilience against attacks is missing. In this work, we use software-defined radios to implement such a physical layer security scheme, namely, orthogonal blinding. To this end, we use orthogonal frequency-division multiplexing (OFDM) as a physical layer, similarly to WiFi. In orthogonal blinding, a multi-antenna transmitter overlays the data it transmits with noise in such a way that every node except the intended receiver is disturbed by the noise. Still, our known-plaintext attack can extract the data signal at an eavesdropper by means of an adaptive filter trained using a few known data symbols. Our demonstrator illustrates the iterative training process at the symbol level, thus showing the practicability of the attack. | ['Matthias Schulz', 'Adrian Loch', 'Matthias Hollick'] | DEMO: Demonstrating Practical Known-Plaintext Attacks against Physical Layer Security in Wireless MIMO Systems | 823,453 |
New parallel algorithms for solving initial and boundary value problems for linear ODEs and their systems on large parallel MIMD computers are proposed. The proposed algorithms are based on dividing a problem into similar so-called local problems, which can be solved independently and in parallel using any known (sequential or parallel) method. The solution is then built as a linear combination of the local solutions. The recurrence relationships (for the case of non-homogeneous equations) and explicit expressions (for the case of homogeneous equations) for the coefficients of that linear combination are obtained. Three elementary examples, illustrating the idea of the proposed approach, are given. The majority of parallel algorithms were developed for solving algebraic problems and boundary value problems for partial differential equations (PDEs). With the exception of the parallelization of methods of the Runge-Kutta type and their modifications, almost no attention was paid to the development of parallel algorithms for ordinary differential equations (ODEs), and the available literature reflects this state [1]-[5]. However, not every parallel algorithm for solving PDEs is applicable for solving ODEs. Some new parallel algorithms for solving initial and boundary value problems for linear ODEs and their systems are described and illustrated in this paper. The proposed approach is based on two main ideas. 1. The first idea is that, in fact, we always deal with finite intervals when we look for the numerical solution of any initial value problem. Even when the given interval (in the formulation of a problem) is infinite, we can obtain the numerical solution of a problem only for the finite subinterval of the given original infinite interval. So it seems to be natural to apply numerical methods directly to the finite (sub)interval of the researcher's interest. | ['Igor Podlubny'] | PARALLEL ALGORITHMS FOR INITIAL AND BOUNDARY VALUE PROBLEMS FOR LINEAR ORDINARY DIFFERENTIAL EQUATIONS AND THEIR SYSTEMS | 669,188 |
In dynamic MRI, spatio-temporal resolution is a very important issue. Recently, the compressed sensing approach has become a highly attractive imaging technique since it enables accelerated acquisition without aliasing artifacts. Our group has proposed an l1-norm based compressed sensing dynamic MRI method called k-t FOCUSS, which outperforms existing methods. However, it is known that the restrictive conditions for l1 exact reconstruction usually cost more measurements than l0 minimization. In this paper, we adopt a sparse Bayesian learning approach to improve k-t FOCUSS and achieve an l0 solution. We demonstrated the improved image quality using in vivo cardiac cine imaging. | ['Hong Jung', 'Jong Chul Ye'] | A sparse Bayesian learning for highly accelerated dynamic MRI | 448,514 |
We propose a decoding algorithm for a class of convolutional codes called skew BCH convolutional codes. These are convolutional codes of designed Hamming distance endowed with a cyclic structure yielding a left ideal of a non-commutative ring (a quotient of a skew polynomial ring). In this setting, right and left division algorithms exist, so our algorithm follows the guidelines of Sugiyama's procedure for finding the error locator and error evaluator polynomials for BCH block codes. | ['José Gómez-Torrecillas', 'F. J. Lobillo', 'Gabriel Navarro'] | A Sugiyama-like decoding algorithm for convolutional codes | 863,413 |
The time-frequency ARMA (TFARMA) model is introduced as a time-varying ARMA model for nonstationary random processes that is formulated in terms of time shifts and frequency (Doppler) shifts. We present Akaike and minimum description length information criteria for the practically important task of selecting the TFARMA model orders. Because the estimated inverse filter used by the resulting order selection procedures is not guaranteed to be stable, we propose an iterative stabilization algorithm that is based on the concepts of instantaneous roots and root reflection/shrinkage. The performance of the proposed order selection and stabilization techniques is assessed through simulation. | ['Michael Jachan', 'Gerald Matz', 'Franz Hlawatsch'] | TFARMA models: order estimation and stabilization | 15,250 |
Clustering plays an important role in VLSI physical design. In this paper, we present a new structure and connectivity based clustering algorithm. The proposed clustering algorithm emphasizes capturing natural circuit clusters, i.e., highly interconnected cell groups. We apply the proposed clustering algorithm to 2-way and k-way partitionings on the ISPD98 benchmark suite described in C. J. Alpert (1998), and 2-way partitioning to part of the ISPD2005 benchmark suite described in G.-J. Nam et al. (2005). The experimental results show that the proposed clustering algorithm can maintain the partitioning solution qualities while reducing the sizes of large scale circuits. | ['Jianhua Li', 'Laleh Behjat', 'Blair Schiffner'] | A structure based clustering algorithm with applications to VLSI physical design | 500,191 |
Although 2D-based face recognition methods have made great progress in the past decades, there are also some unsolved problems such as PIE. Recently, more and more researchers have focused on 3D-based face recognition approaches. Among these techniques, facial feature point localization plays an important role in representing and matching 3D faces. In this paper, we present a novel feature point localization method on 3D faces combining global shape model and local surface model. Bezier surface is introduced to represent local structure of different feature points and global shape model is utilized to constrain the local search result. Experimental results based on comparison of our method and curvature analysis show the feasibility and efficiency of the new idea. | ['Peng Guan', 'Yaoliang Yu', 'Liming Zhang'] | A Novel Facial Feature Point Localization Method on 3D Faces | 186,404 |
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles density functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code. | ['Markus Eisenbach', 'Jeff Larkin', 'Justin Lutjens', 'Steven Rennich', 'James H. Rogers'] | GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials | 790,847 |
In this note we estimate the asymptotic rates for the L2-error decay and the storage cost when approximating 2π-periodic, d-variate functions from isotropic and mixed Sobolev classes by the recent hierarchical tensor format as introduced by Hackbusch and Kuhn. To this end, we survey some results on bilinear approximation due to Temlyakov. The approach taken in this paper improves and generalizes recent results of Griebel and Harbrecht for the bi-variate case. | ['Reinhold Schneider', 'André Uschmajew'] | Approximation rates for the hierarchical tensor format in periodic Sobolev spaces | 424,623 |
The Recognition of Human Action Using Silhouette Histogram. | ['Chaur-Heh Hsieh', 'Ping Sheng Huang', 'Ming-Da Tang'] | The Recognition of Human Action Using Silhouette Histogram. | 746,251 |
Visualization provides a powerful means for data analysis. To be most effective, visual analytics tools must support the fluent and flexible use of visualizations at fast rates. This becomes increasingly difficult with the increasing size of real-world datasets. First, large databases make interactivity more difficult as a query across the entire data can be slow. Second, any attempt to show all items from a dataset will overload the visualization, resulting in clutter. | ['Jarek Gryz', 'Parke Godfrey', 'Piotr Lasek', 'Nasim Razavi'] | Skydive: An interactive data visualization engine | 559,057 |
This paper presents an architecture for a high-speed carry select adder with very long bit lengths utilizing a conflict-free bypass scheme. The proposed scheme has almost half the number of transistors and is faster than a conventional carry select adder. A comparative study is also made between the proposed adder and a Manchester carry chain adder which shows that the proposed scheme has the same transistor count, without suffering any performance degradation, compared to the Manchester carry chain adder. | ['M. Shamanna', 'Sterling R. Whitaker'] | A carry select adder with conflict free bypass circuit | 191,878 |
The use of hybrid free-space optical (FSO)/radio-frequency (RF) links to provide robust, high-throughput communications, fixed infrastructure links, and their associated networks have been thoroughly investigated for both commercial and military applications. The extension of this paradigm to mobile, long-range networks has long been a desire by the military communications community for multigigabit mobile backbone networks. The FSO communications subsystem has historically been the primary limitation. The challenge has been addressing the compensation of propagation effects and dynamic range of the received optical signal. This paper will address the various technologies required to compensate for the effects referenced above. We will outline the effects FSO and RF links experience and how we overcome these degradations. Results from field experiments conducted, including those from the Air Force Research Laboratory Integrated RF/Optical Networked Tactical Targeting Networking Technologies (IRON-T2) program, will be presented. | ['Larry B. Stotts', 'Larry C. Andrews', 'Paul C. Cherry', 'James J. Foshee', 'Paul Kolodzy', 'William K. McIntire', 'Malcolm J. Northcott', 'Ronald L. Phillips', 'H.A. Pike', 'Brian Stadler', 'David W. Young'] | Hybrid Optical RF Airborne Communications | 307,176 |
We propose a stochastic algorithm for the global optimization of chance constrained problems. We assume that the probability measure with which the constraints are evaluated is known only through its moments. The algorithm proceeds in two phases. In the first phase the probability distribution is (coarsely) discretized and solved to global optimality using a stochastic algorithm. We only assume that the stochastic algorithm exhibits a weak* convergence to a probability measure assigning all its mass to the discretized problem. A diffusion process is derived that has this convergence property. In the second phase, the discretization is improved by solving another nonlinear programming problem. It is shown that the algorithm converges to the solution of the original problem. We discuss the numerical performance of the algorithm and its application to process design. | ['Panos Parpas', 'Berç Rustem', 'Efstratios N. Pistikopoulos'] | Global optimization of robust chance constrained problems | 62,461 |
We establish the relation between two language recognition models that use counters and operate in real-time: Greibach's partially blind machines operating in real time (RT-PBLIND), which recognize Petri Net languages, and the consensually regular (CREG) language model of the authors. The latter is based on synchronized computational threads of a finite automaton, where at each step one thread acts as the leader and all other threads as followers. We introduce two new normal forms of RT-PBLIND machines (and Petri Nets), such that counter operations are scheduled and rarefied, and transitions are quasi-deterministic, i.e., the finite automaton obtained by eliminating counter moves is deterministic. We prove that the CREG family can simulate any normalized RT-PBLIND machine, but it also contains the non-RT-PBLIND language {a^n b^n | n > 1}*. | ['Stefano Crespi Reghizzi', 'Pierluigi San Pietro'] | Counter machines, Petri Nets, and consensual computation | 579,721 |
Verification of BPMN Model Functional Completeness by using the Topological Functioning Model | ['Erika Nazaruka', 'Viktorija Ovchinnikova', 'Gundars Alksnis', 'Uldis Sukovskis'] | Verification of BPMN Model Functional Completeness by using the Topological Functioning Model | 728,232 |
Data mining meta-optimization aims to find an optimal data mining model which has the best performance (e.g., highest prediction accuracy) for a specific dataset. The optimization process usually involves evaluating a series of configurations of parameter values for many algorithms, which can be very time-consuming. We propose an agent-based framework to power the meta-optimization through collaboration of computing resources. This framework can evaluate the parameter settings for a list of algorithms in parallel via a multiagent system and therefore can reduce computational time. We have applied the framework to the construction of prediction models for human biomechanics data. The results show that the framework can significantly improve the accuracy of data mining models and the efficiency of data mining meta-optimization. | ['Xiong Liu', 'Kaizhi Tang', 'John R. Buhrman', 'Huaining Cheng'] | An agent-based framework for collaborative data mining optimization | 43,148 |
We present a unified approach to noise removal, image enhancement, and shape recovery in images. The underlying approach relies on the level set formulation of the curve and surface motion, which leads to a class of PDE-based algorithms. Beginning with an image, the first stage of this approach removes noise and enhances the image by evolving the image under flow controlled by min/max curvature and by the mean curvature. This stage is applicable to both salt-and-pepper grey-scale noise and full-image continuous noise present in black and white images, grey-scale images, texture images, and color images. The noise removal/enhancement schemes applied in this stage contain only one enhancement parameter, which in most cases is automatically chosen. The other key advantage of our approach is that a stopping criteria is automatically picked from the image; continued application of the scheme produces no further change. The second stage of our approach is the shape recovery of a desired object; we again exploit the level set approach to evolve an initial curve/surface toward the desired boundary, driven by an image-dependent speed function that automatically stops at the desired boundary. | ['Ravikanth Malladi', 'James A. Sethian'] | A unified approach to noise removal, image enhancement, and shape recovery | 387,344 |
In this paper, we propose a genetic fuzzy image filtering method based on rank-ordered absolute differences (ROAD) and the median of the absolute deviations from the median (MAD). The proposed method consists of three components, including a fuzzy noise detection system, fuzzy switching scheme filtering, and fuzzy parameter optimization using genetic algorithms (GA), to perform efficient and effective noise removal. Our idea is to utilize MAD and ROAD as measures of the noise probability of a pixel. A fuzzy inference system is used to justify the degree to which a pixel can be categorized as noisy. Based on the fuzzy inference result, the fuzzy switching scheme that adopts a median filter as the main estimator is applied to the filtering. The GA training aims to find the best parameters for the fuzzy sets in the fuzzy noise detection. According to the experimental results, the proposed method has successfully removed mixed impulse noise at low to medium probabilities, while keeping the uncorrupted pixels less affected by the median filtering. It also surpasses the other methods, either classical or soft computing-based approaches to impulse noise removal, in MAE and PSNR evaluations. | ['Nur Zahrati Janah', 'Baharum Baharudin'] | Mixed Impulse Fuzzy Filter Based on MAD, ROAD, and Genetic Algorithms | 38,756 |
Scenarios for Collaborative Planning of Inter-Terminal Transportation | ['Herbert Kopfer', 'Dong-Won Jang', 'Benedikt Vornhusen'] | Scenarios for Collaborative Planning of Inter-Terminal Transportation | 865,200 |
Computing partition functions, the normalizing constants of probability distributions, is often hard. Variants of importance sampling give unbiased estimates of a normalizer Z, however, unbiased estimates of the reciprocal 1/Z are harder to obtain. Unbiased estimates of 1/Z allow Markov chain Monte Carlo sampling of "doubly-intractable" distributions, such as the parameter posterior for Markov Random Fields or Exponential Random Graphs. We demonstrate how to construct unbiased estimates for 1/Z given access to black-box importance sampling estimators for Z. We adapt recent work on random series truncation and Markov chain coupling, producing estimators with lower variance and a higher percentage of positive estimates than before. Our debiasing algorithms are simple to implement, and have some theoretical and empirical advantages over existing methods. | ['Colin Wei', 'Iain Murray'] | Markov Chain Truncation for Doubly-Intractable Inference | 912,231 |
In the consensus reaching processes developed in group decision making problems we need to measure the closeness among experts' opinions in order to obtain a consensus degree. As it is known, to achieve a full and unanimous consensus is often not reachable in practice. An alternative approach is to use softer consensus measures, which reflect better all possible partial agreements, guiding the consensus process until high agreement is achieved among individuals. Consensus models based on soft consensus measures have been widely used because these measures represent better the human perception of the essence of consensus. This paper presents an overview of consensus models based on soft consensus measures, showing the pioneering and prominent papers, the main existing approaches and the new trends and challenges. | ['Enrique Herrera-Viedma', 'Francisco Javier Cabrerizo', 'Janusz Kacprzyk', 'Witold Pedrycz'] | A review of soft consensus models in a fuzzy environment | 354,228 |
This paper presents the design and the performance evaluation of a coarse-grain dynamically reconfigurable platform for network applications. The platform consists of two MicroBlaze RISC processors and a number of hardware co-processors used for the processing of the packets payload (DES encryption and Lempel-Ziv Compression). The co-processors can be connected either directly to the processors or using a shared bus. The type of the co-processors is dynamically reconfigured to meet the requirements of the network workload. The system has been implemented in the Xilinx Virtex II Pro FPGA platform and the network traces from real passive measurements have been used for performance evaluation. The use of dynamically reconfigurable co-processors for network applications shows that the performance speedup versus a static version varies from 12% to 35% in the best case and from 10% to 15% on average, depending on the variability in time and distribution of the network traffic. | ['Christoforos Kachris', 'Stamatis Vassiliadis'] | Performance Evaluation of an Adaptive FPGA for Network Applications | 520,085 |
Graphical models are widely used in argumentation to visualize relationships among propositions or arguments. The intuitive meaning of the links in the graphs is typically expressed using labels of various kinds. In this paper we introduce a general semantical framework for assigning a precise meaning to labelled argument graphs which makes them suitable for automatic evaluation. Our approach rests on the notion of explicit acceptance conditions, as first studied in Abstract Dialectical Frameworks (ADFs). The acceptance conditions used here are functions from multisets of labels to truth values. We define various Dung style semantics for argument graphs. We also introduce a pattern language for specifying acceptance functions. Moreover, we show how argument graphs can be compiled to ADFs, thus providing an automatic evaluation tool via existing ADF implementations. Finally, we also discuss complexity issues. | ['Gerhard Brewka', 'Stefan Woltran'] | GRAPPA: a semantical framework for graph-based argument processing | 755,606 |
In this paper, we present a cross-layer analytical framework to jointly investigate antenna diversity and multiuser scheduling under the generalized Nakagami fading channels. We derive a unified capacity formula for the multiuser scheduling system with different multiple-input multiple-output antenna schemes, including: 1) selective transmission/selective combining (ST/SC); 2) maximum ratio transmission/maximum ratio combining (MRT/MRC); 3) ST/MRC; and 4) space-time block codes (STBC). Our analytical results lead to the following four observations regarding the interplay of multiuser scheduling and antenna diversity. First, the higher the Nakagami fading parameter, the lower the multiuser diversity gain for all the considered antenna schemes. Second, from the standpoint of multiuser scheduling, the multiple antennas with the ST/SC method can be viewed as virtual users to amplify multiuser diversity order. Third, the boosted array gain of the MRT/MRC scheme can compensate the detrimental impact of the reduced amount of fading gain on multiuser scheduling, thereby resulting in greater capacity than the ST/SC method. Last, employing the STBC scheme together with multiuser diversity may cause capacity loss due to the reduced amount of fading gain, but without the supplement of array gain. | ['Chiung-Jang Chen', 'Li-Chun Wang'] | A unified capacity analysis for wireless systems with joint multiuser scheduling and antenna diversity in Nakagami fading channels | 159,679 |
Nonnegative matrix factorization (NMF) is a matrix factorization technique that might find meaningful latent nonnegative components. Since, however, the objective function is non-convex, the source separation performance can degrade when the iterative update of the basis matrix is stuck in a poor local minimum. Most of the research updates the basis iteratively to minimize a certain objective function with random initialization, although a few approaches have been proposed for the systematic initialization of the basis matrix, such as the singular value decomposition. In this paper, we propose a novel basis estimation method inspired by the similarity between basis training and vector quantization, which resembles the Linde-Buzo-Gray algorithm. Experiments on audio source separation showed that the proposed method outperformed the NMF using random initialization by about 1.64 dB and 1.43 dB in signal-to-distortion ratio when its target sources were speech and violin, respectively. | ['Kisoo Kwon', 'Jong Won Shin', 'In Kyu Choi', 'Hyung Yong Kim', 'Nam Soo Kim'] | Incremental approach to NMF basis estimation for audio source separation | 993,849 |
An Embedded Hardware Platform For Fungible Interfaces. | ['Avrum Hollinger', 'Joseph Thibodeau', 'Marcelo M. Wanderley'] | An Embedded Hardware Platform For Fungible Interfaces. | 785,657 |
This paper proposes a modified harmony search (MHS) algorithm with an intersect mutation operator and cellular local search for continuous function optimization problems. Instead of focusing on the intelligent tuning of the parameters during the searching process, the MHS algorithm divides all harmonies in harmony memory into a better part and a worse part according to their fitness. The novel intersect mutation operation has been developed to generate new harmony vectors. Furthermore, a cellular local search has also been developed in MHS, which helps to improve the optimization performance by exploring a huge search space in the early run phase to avoid premature convergence, and exploiting a small region in the later run phase to refine the final solutions. To obtain better parameter settings for the proposed MHS algorithm, the impacts of the parameters are analyzed by an orthogonal test and a range analysis method. Finally, two sets of famous benchmark functions have been used to test and evaluate the performance of the proposed MHS algorithm. Functions in these benchmark sets have different characteristics so they can give a comprehensive evaluation on the performance of MHS. The experimental results show that the proposed algorithm not only performs better than those state-of-the-art HS variants but is also competitive with other famous meta-heuristic algorithms in terms of the solution accuracy and efficiency. | ['Jin Yi', 'Liang Gao', 'Xinyu Li', 'Jie Gao'] | An efficient modified harmony search algorithm with intersect mutation operator and cellular local search for continuous function optimization problems | 606,787 |
A simplified shadow removal approach by using interim results of transformed domain GMM foreground segmentation has been developed. The approach is based on the fact that the spatial frequency distribution does not change from the backgrounds in the shadow areas. Due to employing gray level picture processing and to utilizing only low frequency components in the transform domain, the resultant shadow removal approach drastically reduces the amount of processing, compared to conventional shadow removal approaches based on pixel based color component processing. | ['Kazuki Nakagami', 'Toshiaki Shiota', 'Takao Nishitani'] | Low complexity shadow removal on foreground segmentation | 226,677 |
Fast UD factorization-based RLS online parameter identification for model-based condition monitoring of lithium-ion batteries | ['Taesic Kim', 'Yebin Wang', 'Zafer Sahinoglu', 'Toshihiro Wada', 'Satoshi Hara', 'Wei Qiao'] | Fast UD factorization-based RLS online parameter identification for model-based condition monitoring of lithium-ion batteries | 241,106 |
Previous studies have revealed that personal responsibility has an influence on outcome evaluation, although the way this influence works is still unclear. This study imitated the phenomenon of responsibility diffusion in a laboratory to examine the influence of the effect of responsibility diffusion on the processing of outcome evaluation using the event-related potential (ERP) technique. Participants of the study were required to perform the gambling task individually in the high-responsibility condition and with others in the low-responsibility scenario. Self-rating results showed that the participants felt more responsible for monetary loss and believed that they had more contributions to the monetary gains in the high-responsibility condition than in the low-responsibility situation. Both the feedback-related negativity (FRN) and the P300 were sensitive to the responsibility level, as evidenced by the enhanced amplitudes in the high-responsibility condition for both components. Further correlation analysis showed a negative correlation between FRN amplitudes and subjective rating scores (i.e., the higher the responsibility level, the larger the FRN amplitude). The results probably indicate that the FRN and P300 reflect personal responsibility processing under the social context of diffusion of responsibility. | ['Peng Li', 'Shiwei Jia', 'Tingyong Feng', 'Qiang Liu', 'Tao Suo', 'Hong Li'] | The influence of the diffusion of responsibility effect on outcome evaluations: Electrophysiological evidence from an ERP study | 121,648 |
Distance-dependent, pairwise, statistical potentials are based on the concept that the packing observed in known protein structures can be used as a reference for comparing different 3D models for a protein. Here, packing refers to the set of all pairs of atoms in the molecule. Among all methods developed to assess three-dimensional models, statistical potentials are subject both to praise for their power of discrimination, and to criticism for the weaknesses of their theoretical foundations. Classical derivations of pairwise potentials assume statistical independence of all pairs of atoms. This assumption, however, is not valid in general. We show that we can filter the list of all interactions in a protein to generate a much smaller subset of pairs that retains most of the structural information contained in proteins. The filter is based on a geometric method called alpha shapes that captures the packing in a conformation. Statistical scoring functions derived from such subsets perform as well as scoring functions derived from the set of all pairwise interactions. | ['Afra Zomorodian', 'Leonidas J. Guibas', 'Patrice Koehl'] | Geometric filtering of pairwise atomic interactions applied to the design of efficient statistical potentials | 291,779 |
Parametrizations for Families of ECM-friendly curves. | ['Alexandre Gélin', 'Thorsten Kleinjung', 'Arjen K. Lenstra'] | Parametrizations for Families of ECM-friendly curves. | 995,507 |
We describe the design and implementation of the new Live Query Statistics (LQS) feature in Microsoft SQL Server 2016. The functionality includes the display of overall query progress as well as progress of individual operators in the query execution plan. We describe the overall functionality of LQS, give usage examples and detail all areas where we had to extend the current state-of-the-art to build the complete LQS feature. Finally, we evaluate the effect these extensions have on progress estimation accuracy with a series of experiments using a large set of synthetic and real workloads. | ['Kukjin Lee', 'Arnd Christian König', 'Vivek R. Narasayya', 'Bolin Ding', 'Surajit Chaudhuri', 'Brent Ellwein', 'Alexey Eksarevskiy', 'Manbeen Kohli', 'Jacob Wyant', 'Praneeta Prakash', 'Rimma V. Nehme', 'Jiexing Li', 'Jeffrey F. Naughton'] | Operator and Query Progress Estimation in Microsoft SQL Server Live Query Statistics | 820,192 |
This paper explores the use of Binary Decision Diagrams (BDDs) in Conformant Planning. A conformant planner, called BPA, based on the BDD representation for belief sets is developed. Heuristics that fit with the BDD representation are presented and analyzed experimentally. The paper confirms the strong potential of BDDs to enhance performance of heuristic search based conformant planners. | ['Stefano Tognazzi', 'Agostino Dovier', 'Enrico Pontelli', 'Tran Cao Son'] | Exploring the Use of BDDs in Conformant Planning | 609,779 |
A technique to design a dynamic continuous controller to regulate a class of fully actuated mechanical systems with dry friction is proposed. It is shown that the control eliminates the steady-state error and is robust with respect to parameter uncertainties. A simple method to find the parameters of the controller is also proposed. Moreover, an application of this result to control a 2-DOF underactuated mechanical system with dry friction in the non-actuated joint is described. Here, the control objective is to regulate the non-actuated variable while the position and speed of the actuated joint remain bounded. Performance issues of the developed synthesis are illustrated with numerical and experimental results. | ['Roque Morán Martínez', 'Joaquin Alvarez'] | Control of Mechanical Systems with Dry Friction | 355,935 |
Fast technological advancements and little compliance with accessibility standards by Web page authors pose serious obstacles to the Web experience of the blind user. We propose a unified Web document model that enables us to create a richer browsing experience and improved navigability for blind users. The model provides an integrated view on all aspects of a Web page and is leveraged to create a multi-axial user interface. | ['Ruslan R. Fayzrahmanov', 'Max C. Göbel', 'Wolfgang Holzinger', 'Bernhard Krüpl', 'Robert Baumgartner'] | A unified ontology-based web page model for improving accessibility | 184,032 |
In this paper a further generalization of the differential evolution based data classification method is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, for determining the optimal values for all free parameters of the classifier model during the training phase of the classifier. The earlier version of the differential evolution classifier, which applied an individually optimized distance measure for each new data set to be classified, is generalized here so that instead of optimizing a single distance measure for the given data set, we take a further step by proposing an approach where distance measures are optimized individually for each feature of the data set to be classified. In particular, distance measures for each feature are selected optimally from a predefined pool of alternative distance measures. The optimal distance measures are determined by the differential evolution algorithm, which also determines the optimal values for all free parameters of the selected distance measures in parallel. After determining the optimal distance measures for each feature together with their optimal parameters, we combine all feature-wise determined distance measures to form a single total distance measure that is applied for the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure referred to above. During the training process the differential evolution algorithm optimally determines the class vectors, selects optimal distance metrics for each data feature, and determines the optimal values for the free parameters of each selected distance measure. Based on experimental results with nine well known classification benchmark data sets, the proposed approach yields a statistically significant improvement in the classification accuracy of the differential evolution classifier. | ['David Koloseni', 'Jouni Lampinen', 'Pasi Luukka'] | Differential evolution based nearest prototype classifier with optimized distance measures for the features in the data sets | 305,402 |
This study examined how social networks of LIS graduates contribute to their job attainment. Graduates from three ALA-accredited programs in the Southeastern U.S. who took some or all of their coursework online to earn the MLIS degree participated in the study. Findings suggest that recent graduates of entirely online programs have a social capital deficit with the absence of their MLIS peers in their social networks. However, the results also showed that such a deficit may not be an important concern as graduates found most of the jobs with the information provided by LIS professional contacts. Having relatively older contacts in their networks, graduates increased their likelihood of finding new employment after graduation. | ['Fatih Oguz'] | Social Capital Deficit in Online Learning: An Ego-Centric Approach to Occupational Attainment | 602,047 |
A new version of the compilation of higher plant mitochondrial tRNA genes (http://www.ebi.ac.uk/service) has been obtained by means of the FastA program for similarity searching in nucleotide sequence databases. This approach improves the previous collection, which was based on literature data analysis. The current compilation contains 158 sequences with an increase of 43 units. In this paper, some interesting features of the new entries are briefly presented. | ['Amelia Sagliano', 'Mariateresa Volpicella', 'Raffaele Gallerani', 'Luigi R. Ceci'] | A FastA based compilation of higher plant mitochondrial tRNA genes | 436,001 |
This paper presents an application of the ISIF chip (Intelligent Sensor InterFace) for conditioning a dual-axis low-g accelerometer in MEMS technology. MEMS are nowadays the standard in automotive applications (and not only), as they feature a drastic reduction in cost, area and power, while they require a more complex electronic interface with respect to traditional discrete devices. ISIF is a Platform On Chip implementation, aiming to fast prototype a wide range of automotive sensors thanks to its high configuration resources, achieved both by full analog / digital IP trimming options and by flexible routing structures. This accelerometer implementation exploits a relevant part of the ISIF hardware resources, but also requires signal processing add-ins (software emulation of digital DSP blocks) for the closed loop conditioning architecture and for performance improvement (for example temperature drift compensation). In spite of the short prototyping time, the resulting system achieves good performance with respect to commercial devices, featuring a 0.9 mg/√Hz noise density with 1024 LSB/g sensitivity on the digital output over a +/- 2g FS, and an offset drift within 30 mg over a 100°C range, with 2% of FS sensitivity drift. Miniboards have been developed as product prototypes, consisting of a small PCB with the ISIF and accelerometer dies bonded together, firmware embedded in EEPROM and communication transceivers. | ["F. D'Ascoli", 'F. Iozzi', 'C. Marino', 'M. Melani', 'M. Tonarelli', 'Luca Fanucci', 'A. Giambastiani', 'Alessandro Rocchi', 'M. De Marinis'] | Low-g accelerometer fast prototyping for automotive applications | 541,198 |
Prediction of stock prices is an issue of interest to financial markets. Many prediction techniques have been reported for stock forecasting. Neural networks are viewed as one of the more suitable techniques. In this study, an experiment on forecasting the Stock Exchange of Thailand (SET) index was conducted by using feedforward backpropagation neural networks. In the experiment, many combinations of parameters were investigated to identify the right set of parameters for the neural network models in the forecasting of SET. Several global and local factors influencing the Thai stock market were used in developing the models, including the Dow Jones index, Nikkei index, Hang Seng index, gold prices, minimum loan rate (MLR), and the exchange rates of the Thai Baht and the US dollar. Two years' historical data were used to train and test the models. Three suitable neural network models identified by this research are a three-layer, a four-layer and a five-layer neural network. The mean absolute percentage errors (MAPE) of the predictions of each model were 1.26594, 1.14719 and 1.14578, respectively. | ['Suchira Chaigusin', 'Chaiyaporn Chirathamjaree', 'Judith Clayden'] | The Use of Neural Networks in the Prediction of the Stock Exchange of Thailand (SET) Index | 506,118 |
Software is evolutionary in nature. From the time a software product is defined until it is no longer used, it changes. We focus in this paper on aspect-oriented (AO) software evolution. Although AO software engineering is the subject of ongoing research, AO software evolution has received less attention. AO programming is a mature technology that modularises crosscutting concerns; unfortunately, it also introduces new dependencies between them, which restricts the evolvability of the software system. In order to cope with all types of dependencies in AO programs, we propose a new evolution modelling approach. In our proposal, the AO source code is modelled in a more abstract and formal form as an attributed coloured graph, in which the different dependencies in the software system are well defined. Change requests are then expressed as rewriting rules on this coloured graph. We give the details of our approach as well as its implementation, and we provide an empirical evaluation to prove the e... | ['Hanene Cherait', 'Nora Bounour'] | Rewriting rule-based model for aspect-oriented software evolution | 941,593
Detection of courtesy amount block on bank checks | ['Arun Agarwal', 'Karim Hussein', 'Amar Gupta', 'Patrick S. P. Wang'] | Detection of courtesy amount block on bank checks | 459,345 |
Compressive sensing/sampling (CS) has been one of the most active research areas in signal and image processing since it was proposed. The importance of CS is that it provides a high-performance sampling theory for sparse signals or signals with a sparse representation. CS has shown outstanding performance in many applications. In this paper we discuss two potential applications of CS in radio astronomy: image deconvolution and Faraday rotation measure synthesis. Both theoretical analysis and experimental results show that CS will bring radio astronomy to a brand new stage. | ['Feng Li', 'T. J. Cornwell', 'Frank de Hoog'] | The applications of compressive sensing to radio astronomy | 198,461
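As a concrete illustration of the CS principle this abstract relies on (recovering a sparse signal from few random measurements), here is a small, self-contained ISTA sketch in NumPy; it is not the deconvolution or rotation-measure pipeline of the paper, and the sizes and threshold are illustrative.

```python
# Minimal ISTA sketch: recover a sparse signal from a few random measurements.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                        # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
y = A @ x_true                              # compressive measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):                        # ISTA iterations
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```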
In this paper, we propose a current waveform estimation algorithm for signal lines that does not require SPICE simulation. Unlike previous methods, we do not use function fitting or compute the effective capacitance. Instead, the proposed algorithm predicts the current waveform by using the current responses of a driver for multiple fixed capacitances provided by the foundry. We demonstrate the usefulness of the proposed method for evaluating the electromigration reliability of signal lines. Experimental results indicate excellent accuracy and run times compared with the golden results obtained from SPICE. | ['Zhong Guan', 'Malgorzata Marek-Sadowska'] | An efficient and accurate algorithm for computing RC current response with applications to EM reliability evaluation | 908,918
A general framework is constructed for efficiently and stably evaluating the Hadamard finite-part integrals by composite quadrature rules. Firstly, the integrands are assumed to have the Puiseux expansions at the endpoints with arbitrary algebraic and logarithmic singularities. Secondly, the Euler-Maclaurin expansion of a general composite quadrature rule is obtained directly by using the asymptotic expansions of the partial sums of the Hurwitz zeta function and the generalized Stieltjes constant, which shows that the standard numerical integration formula is not convergent for computing the Hadamard finite-part integrals. Thirdly, the standard quadrature formula is recast in two steps. In step one, the singular part of the integrand is integrated analytically and in step two, the regular integral of the remaining part is evaluated using the standard composite quadrature rule. In this stage, a threshold is introduced such that the function evaluations in the vicinity of the singularity are intentionally excluded, where the threshold is determined by analyzing the roundoff errors caused by the singular nature of the integrand. Fourthly, two practical algorithms are designed for evaluating the Hadamard finite-part integrals by applying the Gauss-Legendre and Gauss-Kronrod rules to the proposed framework. Practical error indicator and implementation involved in the Gauss-Legendre rule are addressed. Finally, some typical examples are provided to show that the algorithms can be used to effectively evaluate the Hadamard finite-part integrals over finite or infinite intervals. | ['Tongke Wang', 'Zhiyue Zhang', 'Zhifang Liu'] | The practical Gauss type rules for Hadamard finite-part integrals using Puiseux expansions | 904,155 |
The paper introduces and analyzes the asymptotic (large sample) performance of a family of blind feedforward nonlinear least-squares (NLS) estimators for joint estimation of carrier phase, frequency offset, and Doppler rate for burst-mode phase-shift keying transmissions. An optimal or "matched" nonlinear estimator that exhibits the smallest asymptotic variance within the family of envisaged blind NLS estimators is developed. The asymptotic variance of these estimators is established in closed-form expression and shown to approach the Cramer-Rao lower bound of an unmodulated carrier at medium and high signal-to-noise ratios (SNR). Monomial nonlinear estimators that do not depend on the SNR are also introduced and shown to perform similarly to the SNR-dependent matched nonlinear estimator. Computer simulations are presented to corroborate the theoretical performance analysis. | ['Yan Wang', 'Erchin Serpedin', 'Philippe Ciblat'] | Optimal blind carrier recovery for MPSK burst transmissions | 206,478 |
In the present work, we extend the study of the structural stability of the Julia sets of noise-perturbed complex quadratic maps to the presence of dynamic and output noise, in both the additive and the multiplicative cases. The critical values of the noise strength at which the Julia set of a family of noise-perturbed complex quadratic maps completely loses its original structure are also calculated. Using graphical tools, we demonstrate how one can localize the regions of the Julia sets that are affected by the presence of noise in each case. Finally, two numerical invariants for the Julia set of noise-perturbed complex quadratic maps are proposed for the study of the noise effect. | ['Ioannis Andreadis', 'Theodoros E. Karakasidis'] | ON A CLOSENESS OF THE JULIA SETS OF NOISE-PERTURBED COMPLEX QUADRATIC MAPS | 390,498
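A toy NumPy sketch of the object under study: iterating z -> z^2 + c with additive dynamic noise and keeping the points whose orbits stay bounded. The parameter c, the Gaussian noise model, and the resolution are illustrative assumptions, not the paper's exact setup.

```python
# Toy sketch of a filled Julia set for z -> z^2 + c with additive dynamic noise.
# The noise strength `sigma` is the knob whose critical value the paper studies.
import numpy as np

def noisy_julia(c=-0.74543 + 0.11301j, sigma=0.0, res=400, max_iter=60, seed=0):
    rng = np.random.default_rng(seed)
    xs = np.linspace(-1.6, 1.6, res)
    ys = np.linspace(-1.6, 1.6, res)
    z = xs[None, :] + 1j * ys[:, None]
    alive = np.ones(z.shape, dtype=bool)          # points not yet escaped
    for _ in range(max_iter):
        noise = sigma * (rng.normal(size=z.shape) + 1j * rng.normal(size=z.shape))
        z[alive] = z[alive] ** 2 + c + noise[alive]   # additive dynamic noise
        alive &= np.abs(z) < 2.0                      # escape criterion
    return alive                                      # boolean image of the filled set

clean = noisy_julia(sigma=0.0)
perturbed = noisy_julia(sigma=0.05)
print("pixels retained:", clean.sum(), "->", perturbed.sum())
```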
With simultaneous measurements from ever increasing populations of neurons, there is a growing need for sophisticated tools to recover signals from individual neurons. In electrophysiology experiments, this classically proceeds in a two-step process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma process model. Importantly, we develop an online approximate inference scheme enabling real-time analysis, with performance exceeding the previous state-of-the-art. Via exploratory data analysis—using data with partial ground truth as well as two novel data sets—we find several features of our model collectively contribute to our improved performance including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels. We hope to enable novel experiments simultaneously measuring many thousands of neurons and possibly adapting stimuli dynamically to probe ever deeper into the mysteries of the brain. | ['David E. Carlson', 'Vinayak Rao', 'Joshua T. Vogelstein', 'Lawrence Carin'] | Real-Time Inference for a Gamma Process Model of Neural Spiking | 159,885 |
Motivation: One of the major bottlenecks with ab initio protein folding is an effective conformation sampling algorithm that can generate native-like conformations quickly. The popular fragment assembly method generates conformations by restricting the local conformations of a protein to short structural fragments in the PDB. This method may limit conformations to a subspace to which the native fold does not belong because (i) a protein with a really new fold may contain some structural fragments not in the PDB and (ii) the discrete nature of fragments may prevent them from building a native-like fold. Previously we have developed a conditional random fields (CRF) method for fragment-free protein folding that can sample conformations in a continuous space and demonstrated that this CRF method compares favorably to the popular fragment assembly method. However, the CRF method is still limited by its capability of generating conformations compatible with a sequence. Results: We present a new fragment-free approach to protein folding using a recently invented probabilistic graphical model, conditional neural fields (CNF). This new CNF method is much more powerful than CRF in modeling the sophisticated protein sequence-structure relationship and thus enables us to generate native-like conformations more easily. We show that when coupled with a simple energy function and replica exchange Monte Carlo simulation, our CNF method can generate decoys much better than CRF on a variety of test proteins including the CASP8 free-modeling targets. In particular, our CNF method can predict a correct fold for T0496_D1, one of the two CASP8 targets with a truly new fold. Our predicted model for T0496 is significantly better than all the CASP8 models. Contact: jinboxu@gmail.com | ['Feng Zhao', 'Jian Peng', 'Jinbo Xu'] | Fragment-free approach to protein folding using conditional neural fields | 82,360
We investigate the problem of storing, indexing, and retrieving future locations of moving objects, and provide a data model that supports these operations efficiently. Each moving object has four independent variables that allow us to predict its future location: a starting location, a destination, a starting time, and an initial velocity. To understand the underlying complexity of the problem, we investigate and categorize the configurations in which two variables can vary. Based on that understanding, we choose a configuration that is somewhat restrictive but can still be used in a wide variety of realistic settings. A performance study shows that our model has much less overhead in processing range queries than other proposed approaches. | ['Hae Don Chon', 'Divyakant Agrawal', 'Amr El Abbadi'] | Storage and retrieval of moving objects | 869,161
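A minimal sketch of the prediction implied by the four variables listed above (start location, destination, start time, initial velocity), assuming straight-line travel at constant speed; the class and the range query below are hypothetical illustrations, not the paper's data model or index.

```python
# Toy future-location model: position at time t follows from start, destination,
# start time and speed; a range query checks which objects fall in a box at t.
from dataclasses import dataclass
import math

@dataclass
class MovingObject:
    oid: int
    start: tuple          # (x, y) at start_time
    dest: tuple           # (x, y) destination
    start_time: float
    speed: float          # distance units per time unit

    def position_at(self, t):
        dx, dy = self.dest[0] - self.start[0], self.dest[1] - self.start[1]
        dist = math.hypot(dx, dy)
        travelled = min(max(t - self.start_time, 0.0) * self.speed, dist)
        f = travelled / dist if dist > 0 else 0.0
        return (self.start[0] + f * dx, self.start[1] + f * dy)

def range_query(objects, t, xmin, xmax, ymin, ymax):
    """Return ids of objects predicted to lie in the box at future time t."""
    hits = []
    for o in objects:
        x, y = o.position_at(t)
        if xmin <= x <= xmax and ymin <= y <= ymax:
            hits.append(o.oid)
    return hits

objs = [MovingObject(1, (0, 0), (10, 0), 0.0, 1.0),
        MovingObject(2, (5, 5), (5, -5), 2.0, 2.0)]
print(range_query(objs, t=4.0, xmin=3, xmax=6, ymin=-2, ymax=2))
```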
Local-oscillator (LO) pulling is a typical issue in fully integrated transceivers. To offset the oscillator frequency from the PA output frequency, SSB mixing or division-by-2 is typically used [1]. However, the first might require additional filtering to remove mixing spurs and the latter is still sensitive to second-harmonic pulling. The divider described in this paper prevents LO pulling by introducing a fractional ratio between input and output frequencies. Since fractional spurs are suppressed by digital calibration, no additional filtering is required, removing inductors and saving silicon area. | ['Stefano Pellerano', 'Paolo Madoglio', 'Yorgos Palaskas'] | A 4.75GHz fractional frequency divider with digital spur calibration in 45nm CMOS | 474,498 |
Gaussian distribution has for several decades been ubiquitous in the theory and practice of statistical classification. Despite the early proposals motivating the use of predictive inference to design a classifier, this approach has gained relatively little attention apart from certain specific applications, such as speech recognition where its optimality has been widely acknowledged. Here we examine statistical properties of different inductive classification rules under a generic Gaussian model and demonstrate the optimality of considering simultaneous classification of multiple samples under an attractive loss function. It is shown that the simpler independent classification of samples leads asymptotically to the same optimal rule as the simultaneous classifier when the amount of training data increases, if the dimensionality of the feature space is bounded in an appropriate manner. Numerical investigations suggest that the simultaneous predictive classifier can lead to higher classification accuracy than the independent rule in the low-dimensional case, whereas the simultaneous approach suffers more from noise when the dimensionality increases. | ['Yaqiong Cui', 'Jukka Sirén', 'Timo Koski', 'Jukka Corander'] | Simultaneous Predictive Gaussian Classifiers | 650,922 |
Design and development of fault diagnosis schemes (FDS) for electric power distribution systems are major steps in realizing the self-healing function of a smart distribution grid. The application of the FDS in the electric power distribution systems is mainly aimed at precise detecting and locating of the deteriorated components, thereby enhancing the quality and reliability of the electric power delivered to the customers. The impacts of two types of the FDS on distribution system reliability are compared and presented in this paper. The first type is a representative of the FDS which diagnoses the deteriorated components after their failing. However, the second type is a representative of the FDS which can diagnose the failing components prior to a complete breakdown and while still in the incipient failure condition. To provide quantitative measures of the reliability impacts of these FDS, the comparative and sensitivity case studies are conducted on a typical Finnish urban distribution network. | ['Shahram Kazemi', 'Matti Lehtonen', 'Mahmud Fotuhi-Firuzabad'] | Impacts of Fault Diagnosis Schemes on Distribution System Reliability | 451,629 |
A new method (NI-DACG) for the partial eigensolution of large sparse symmetric FE eigenproblems is presented. NI-DACG relies on the optimization of Rayleigh quotients in successively deflated subspaces by a preconditioned conjugate gradient technique and uses a multiple-grid-type approach to obtain an improved eigenvector estimate on nested FE grids on which the solution to the continuous eigenproblem is sought. NI-DACG is implemented on the CRAY Y-MP supercomputer making use of vectorization and/or parallelization with two and four processors. Results for the calculation of the 50 smallest eigenpairs of two representative sample problems show a gain in CPU time that exceeds one order of magnitude with respect to the scalar implementation of NI-DACG and emphasize the promising features of this technique for partial eigenanalysis on supercomputers. | ['Giorgio Pini', 'Giuseppe Gambolati'] | Parallel eigenanalysis for nested grids | 468,505
Effective drawing of proportional symbol maps using GRASP. | ['Rafael G. Cano', 'Guilherme Kunigami', 'Cid C. de Souza', 'Pedro Jussieu de Rezende'] | Effective drawing of proportional symbol maps using GRASP. | 791,058 |
The CPU module is composed of networks that include many multiprocessors, and parallel processing is carried out between these processors. The most important element in a VLSI multiprocessor network is its interconnection structure: the key is to carry out communication between a large number of nodes reliably while preserving scalability. In this paper, we introduce a parallel architecture with short-path communication, featuring random connections as in a Small-World Network, and we present a network architecture without a global clock in which communicating processes are coordinated by a CSP synchronization method. | ['H. Mori', 'Minoru Uehara'] | Small-World Architecture for Parallel Processors | 732,091
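To illustrate the small-world effect such an architecture relies on, the sketch below compares a purely local ring of processor nodes with a Watts-Strogatz graph in which a fraction of links is rewired at random; networkx and the chosen node/degree counts are assumptions for illustration, not the paper's topology.

```python
# Small-world illustration: random "shortcut" links sharply reduce average hop count
# while keeping most wiring local.
import networkx as nx

n, k = 64, 4                                                   # 64 nodes, 4 ring neighbours each
ring = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)            # purely local wiring
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1)  # 10% links rewired

print("avg hops, local ring   :", nx.average_shortest_path_length(ring))
print("avg hops, small world  :", nx.average_shortest_path_length(small_world))
print("clustering, small world:", nx.average_clustering(small_world))
```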
WordNet is a widely known lexicon used as an ontological resource, hosting a comparatively large collection of semantically interconnected words. Use of such resources produces meaningful results and improves users' search experience through increased precision and recall. This paper presents our facet-enabled, WordNet-powered semantic search work done in the context of the bioenergy domain. The main hurdle to achieving the expected result was sense disambiguation, further complicated by the occasionally fine-grained distinction between meanings of terms in WordNet. To overcome this issue, a novel sense disambiguation approach based on automatically built domain-specific ontologies, the WordNet synset hierarchy and term (or word) sense ranks is proposed. | ['Feroz Farazi', 'Craig Chapman', 'Pathmeswaran Raju', 'Lynsey Melville'] | WordNet Powered Faceted Semantic Search with Automatic Sense Disambiguation for Bioenergy Domain | 692,782
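A minimal sketch of the WordNet ingredients mentioned above, using NLTK to list the noun synsets of a term and rank them by corpus lemma counts as a simple proxy for sense rank; the query term "plant" and the ranking heuristic are illustrative, and the paper's full domain-ontology disambiguation is not reproduced.

```python
# Rank WordNet noun senses of a term by corpus frequency (lemma counts).
# Requires NLTK with the WordNet corpus downloaded (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def ranked_senses(word):
    scored = []
    for syn in wn.synsets(word, pos=wn.NOUN):
        # Frequency of this word's lemma within the synset, as a crude sense rank
        freq = sum(l.count() for l in syn.lemmas() if l.name().lower() == word.lower())
        scored.append((freq, syn.name(), syn.definition()))
    return sorted(scored, reverse=True)

for freq, name, gloss in ranked_senses("plant")[:3]:
    print(freq, name, gloss)
```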
Although the term "Big Data" is often used to refer to large datasets generated by science and engineering or business analytics efforts, increasingly it is used to refer to social networking websites and the enormous quantities of personal information, posts, and networking activities contained therein. The quantity and sensitive nature of this information constitute both a fascinating means of inferring sociological parameters and a grave risk to privacy. The present study aimed to find evidence in the literature that malware has already adapted, to a significant degree, to this specific form of Big Data. Evidence of the potential for abuse of personal information was found: predictive models for personal traits of Facebook users are alarmingly effective with only a minimal depth of information ("Likes"). It is likely that more complex forms of information (e.g. posts, photos, connections, statuses) could lead to an unprecedented level of intrusiveness and familiarity with sensitive personal information. Support for the view that this potential for abuse of private information is being exploited was found in research describing the rapid adaptation of malware to social networking sites, for the purposes of social engineering and the involuntary surrender of personal information. Social media networks can be used to predict the personality of an individual. Social media networks are very vulnerable to privacy intrusion. Social networking sites are vulnerable to malware risks. Social networking sites have large amounts of big data. | ['Romany F. Mansour'] | Understanding how big data leads to social networking vulnerability | 581,506
In this paper, we consider single- and multi-user Gaussian channels with feedback under expected power constraints and with non-vanishing error probabilities. In the first of two contributions, we study asymptotic expansions for the additive white Gaussian noise (AWGN) channel with feedback under the average error probability formalism. By drawing ideas from Gallager and Nakiboglu's work for the direct part and the meta-converse for the converse part, we establish the $\varepsilon$-capacity and show that it depends on $\varepsilon$ in general and so the strong converse fails to hold. Furthermore, we provide bounds on the second-order term in the asymptotic expansion. We show that for any positive integer $L$, the second-order term is bounded between a term proportional to $-\ln_{(L)} n$ (where $\ln_{(L)}(\cdot)$ is the $L$-fold nested logarithm function) and a term proportional to $+(n\ln n)^{1/2}$, where $n$ is the blocklength. The lower bound on the second-order term shows that feedback does provide an improvement in the maximal achievable rate over the case where no feedback is available. In our second contribution, we establish the $\varepsilon$-capacity region for the AWGN multiple access channel with feedback under the expected power constraint by combining ideas from hypothesis testing, information spectrum analysis, Ozarow's coding scheme, and power control. | ['Lan V. Truong', 'Silas L. Fong', 'Vincent Y. F. Tan'] | On Gaussian Channels With Feedback Under Expected Power Constraints and With Non-Vanishing Error Probabilities | 583,552
We report a new method for designing (M,d,k) constrained codes for use in multi-level optical recording channels. The method allows us to design practical codes that have simple encoder tables and decoders with a fixed window length. The codes presented here for the d = 1 and d = 2 cases achieve higher storage densities than previously reported codes, and come within 0.3 - 0.7% of capacity. | ['Ashwin Kumar', 'Kees A. Schouhamer Immink'] | Design of close-to-capacity constrained codes for multi-level optical recording | 388,689
This paper proposes a carrier phase recovery approach for use with turbo codes. The phase estimation is implemented with the aid of the extrinsic information from the turbo decoder. A series of look-up tables are pre-computed to reduce the computational complexity, thereby avoiding the introduction of delay into the decoding. Simulations are carried out in both BPSK and QPSK systems. If the block size is 1024, a phase error of up to 82° in a BPSK system, and up to 37° in a QPSK system, can be removed completely. Compared with the conventional method, which has separate phase recovery and decoding, this approach exhibits a great improvement. The effect of block size is also considered. The results demonstrate that the longer the block size, the better the performance. | ['Li X. Zhang', 'Alister G. Burr'] | Phase estimation with the aid of soft output from turbo decoding | 401,992
The existence of redundancy is a serious problem in virtual enterprises, in which a number of collaborating enterprises join together to manufacture and sell a class of product for a time-limited period. This paper proposes a new approach for the detection and elimination of redundancy in virtual enterprises; the proposed approach is based on workflows and uses Petri nets for the modelling and simulation of workflows. The paper also presents a working example as a proof of concept. | ['Reggie Davidrajuh'] | Workflow Based Approach for Eliminating Redundancy in Virtual Enterprising | 61,284
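Below is a tiny, hypothetical Petri net sketch in the spirit of the workflow modelling described above: places hold tokens and a transition fires when all its input places are marked. The toy workflow and its place names are invented for illustration only.

```python
# Minimal Petri net: a marking (place -> tokens) plus transitions with input/output places.
class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)            # place -> token count
        self.transitions = transitions          # name -> (input places, output places)

    def enabled(self):
        return [t for t, (ins, _) in self.transitions.items()
                if all(self.marking.get(p, 0) > 0 for p in ins)]

    def fire(self, t):
        ins, outs = self.transitions[t]
        if t not in self.enabled():
            raise ValueError(f"transition {t} not enabled")
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A toy order-handling workflow shared by two partner enterprises
net = PetriNet(
    marking={"order_received": 1},
    transitions={
        "check_stock": (["order_received"], ["stock_checked"]),
        "ship_goods":  (["stock_checked"], ["order_done"]),
    },
)
while net.enabled():
    t = net.enabled()[0]
    net.fire(t)
    print("fired", t, "->", net.marking)
```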
Section Editors' Introduction: WHAT IS NEXT FOR MOBILE SENSING? | ['Robin Kravets', 'Nic Lane'] | Section Editors' Introduction: WHAT IS NEXT FOR MOBILE SENSING? | 692,326 |
With the increasing adoption of Model-Based Development in many domains (e.g., Automotive Software Engineering, Business Process Engineering), models are starting to become core artifacts of modern software engineering processes. By raising the level of abstraction and using concepts closer to the problem and application domain rather than the solution and technical domain, models become core assets and reusable intellectual property, being worth the effort of maintaining and evolving them. Therefore, increasingly models experience the same issues as traditional software artifacts, i.e., being subject to many kinds of changes, which range from rapidly evolving platforms to the evolution of the functionalities provided by the applications developed. These modifications include changes at all levels, from requirements through architecture and design, to executable models, documentation and test suites. They typically affect various kinds of models including data models, behavioral models, domain models, source code models, goal models, etc. Coping with and managing the changes that accompany the evolution of software assets is therefore an essential aspect of Software Engineering as a discipline. | ['Dirk Deridder'] | Summary of the second international workshop on models and evolution | 573,449 |
Aminoacridines have a long history in the drug and dye industries and display a wide range of biological and physical properties. Despite the historical relevance of 9-aminoacridines, there have been few studies investigating their stability. 9-Aminoacridines are known to hydrolyze at the C9-N15 bond, yielding acridones. In this study, the pH-dependent hydrolysis rates of a series of 9-substituted aminoacridines are investigated. In addition, ground-state physical properties of the compounds are determined using ab initio quantum mechanics calculations to gain insight into the forces that drive hydrolysis. An analysis of the bond orders, bond dissociation energies, and conformational energies shows that the rate of hydrolysis depends on two main factors: delocalization across the C9-N15 bond and steric effects. The computational results are applied to explain the change in experimental rates of hydrolysis going from primary to secondary and to tertiary substituted 9-aminoacridines. In the case of tertiary substituted amines, the calculations indicate the C9-N15 bond is forced into a more gauche-like conformation, greatly diminishing delocalization (as shown by reductions in bond orders and bond energy), which leads to rapid hydrolysis. A model of intramolecular hydrogen bonding is also presented, which explains the increased rate of hydrolysis observed for highly substituted compounds under acidic conditions. | ['John R. Goodell', 'Bengt Svensson', 'David M. Ferguson'] | Spectrophotometric determination and computational evaluation of the rates of hydrolysis of 9-amino-substituted acridines | 313,978
Industry is defining a new generation of mobile wireless technologies, called in cellular terminology "fourth generation" or "4G." This article shows that a system combining extensions of two radio access technologies, IEEE 802.11 and IEEE 802.16, meets the ITU-R's "IMT-Advanced" or 4G requirements. The extensions are 802.16m (100 Mb/s, 250 km/h) and 802.11VHT (1 Gb/s, low velocity). The focus of this article is to show how IEEE 802.21 (the emerging IEEE standard for media-independent handover services) supports "seamless" mobility between these two radio access technologies. This mobility integrates the two radio access technologies into one system. We conclude that an 802.11VHT + 802.16m + 802.21 system is likely to be proposed to the ITU-R for IMT-Advanced 4G. | ['Les Eastwood', 'Scott Migaldi', 'Qiaobing Xie', 'Vivek Gupta'] | Mobility using IEEE 802.21 in a heterogeneous IEEE 802.16/802.11-based, IMT-advanced (4G) network | 103,288
It is important to improve data reliability and data access efficiency for data-intensive applications in a data grid environment. In this paper, we propose an Information Dispersal Algorithm (IDA)-based parallel storage scheme for massive data distribution and parallel access in the Scientific Data Grid. The scheme partitions a data file into unrecognizable blocks and distributes them across many target storage nodes according to user profile and system conditions. A subset of blocks, which can be downloaded in parallel to remote clients, is required to reconstruct the data file. This scheme can be deployed on the top of current grid middleware. A demonstration and experimental analysis show that the IDA-based parallel storage scheme has better data reliability and data access performance than the existing data replication methods. Furthermore, this scheme has the potential to reduce considerably storage requirements for large-scale databases on a data grid. | ['Weizhong Lu', 'Yuanchun Zhou', 'Lei Liu', 'Baoping Yan'] | An IDA-Based Parallel Storage Scheme in the Scientific Data Grid | 490,123 |
Gait velocity has been consistently shown to be an important indicator and predictor of health status, especially in older adults. Gait velocity is often assessed clinically, but the assessments occur infrequently and thus do not allow optimal detection of key health changes when they occur. In this paper, we show that the time it takes a person to move between rooms in their home, denoted 'transition time', can predict gait velocity when estimated from passive infrared motion detectors installed in a patient's own home. Using a support vector regression approach to model the relationship between transition times and gait velocities, we show that velocity can be predicted with an average error of less than 2.5 cm/sec. This is demonstrated with data collected over a 5 year period from 74 older adults monitored in their own homes. This method is simple and cost effective, and has advantages over competing approaches, such as obtaining 20 to 100x more gait velocity measurements per day and offering the fusion of location-specific information with time-stamped gait estimates. These advantages allow stable estimates of gait parameters (maximum or average speed, variability) at shorter time scales than current approaches. This also provides a pervasive in-home method for context-aware gait velocity sensing that allows for monitoring of gait trajectories in space and time. | ['Rajib Rana', 'Daniel Austin', 'Peter G. Jacob', 'Mohanraj Karunanithi', 'Jeffrey Kaye'] | Continuous Gait Velocity Estimation using Household Motion Detectors | 245,658
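A minimal sketch of the regression idea above, fitting a support vector regressor to synthetic transition-time/velocity pairs (velocity roughly equal to an assumed hallway length divided by transition time); the data, kernel and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Learn gait velocity (cm/s) from room transition times (s) with SVR on synthetic data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
hall_length_cm = 500.0
t_transition = rng.uniform(4.0, 12.0, size=200)                    # seconds between sensor firings
v_true = hall_length_cm / t_transition + rng.normal(0, 2.0, 200)   # cm/s with measurement noise

X = t_transition.reshape(-1, 1)
model = SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X[:150], v_true[:150])

pred = model.predict(X[150:])
print("mean abs error (cm/s):", np.mean(np.abs(pred - v_true[150:])))
```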
With the development of social media, services in social media have significantly changed people's habits of using the Internet. However, given the large amount of information posted by users and the highly frequent updates in social media, users often face the problem of information overload and miss out on content that they may be interested in. Recommender systems, which recommend items (e.g., a product, a service, a tweet, etc.) to users based on their interests, are an effective technique to handle this issue. In this paper, we borrow the matrix factorization model from recommender systems to predict users' retweeting behavior in social media. Compared with previous works, we take the relevance of users' interests, tweets' content, and publishers' influence into account simultaneously. Our experimental results on a real-world dataset show that the proposed model achieves desirable performance in characterizing users' retweeting behaviors and predicting topic diffusion in social media. | ['Jun Li', 'Jiamin Qin', 'Tao Wang', 'Yi Cai', 'Huaqing Min'] | A Collaborative Filtering Model for Personalized Retweeting Prediction | 554,821
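The core of the borrowed recommender model can be sketched as plain matrix factorization trained by SGD on observed user-tweet interactions; the synthetic data below and the omission of content and publisher-influence features are simplifying assumptions, not the paper's full model.

```python
# Bare-bones matrix factorization with SGD on observed (user, tweet, retweeted) triples.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_tweets, k = 30, 40, 5
# Synthetic observed interactions: (user id, tweet id, retweeted 0/1)
obs = [(int(rng.integers(n_users)), int(rng.integers(n_tweets)), int(rng.integers(2)))
       for _ in range(400)]

U = 0.1 * rng.normal(size=(n_users, k))    # latent user factors
V = 0.1 * rng.normal(size=(n_tweets, k))   # latent tweet factors
lr, reg = 0.05, 0.02

for epoch in range(50):
    for u, i, r in obs:
        err = r - U[u] @ V[i]
        u_old = U[u].copy()                           # keep pre-update copy for V's step
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_old - reg * V[i])

u, i, r = obs[0]
print("predicted retweet score:", round(float(U[u] @ V[i]), 3), "observed:", r)
```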
With the proliferation of cloud computing concept, the datacenters, as the basic infrastructure for cloud computing, have gained an ever-growing attention during the last decade. Energy consumption in datacenters is one of the several features of them that have been the target of various researches. Two major consumers of energy in datacenters are the cooling system and IT equipment. Computing resources, such as servers, and communicating ones, such as switches, constitute the main portion of IT equipment. Among these two major players, the servers have been considered more than networking equipment. Making servers energy proportional as well as server consolidation are the two essential approaches regarding reduction of servers’ energy consumption. However, some researches indicate that 10–20% of energy consumption of IT equipment goes to network equipment and hence they must also be considered en route to better energy consumption in datacenters. The focus of this chapter is energy consumption of network equipment in datacenters and conducted researches in this area. First, a quick summary about network energy consumption in datacenters is presented. After that, related state of the art approaches and techniques are categorized, reviewed, and discussed. Finally, the chapter is concluded with presentation of recent original work of authors and its details. | ['Seyed Morteza Nabavinejad', 'Maziar Goudarzi'] | Chapter Five – Communication-Awareness for Energy-Efficiency in Datacenters | 683,192 |
Variable selection is an important research topic in modern statistics. Traditional variable selection methods can only select the mean model and/or the variance model, and cannot be used to select the joint mean, variance and skewness models. In this paper, the authors propose joint location, scale and skewness models for situations where the data set under consideration involves asymmetric outcomes, and consider the problem of variable selection for the proposed models. Based on an efficient unified penalized likelihood method, the consistency and the oracle property of the penalized estimators are established. The authors develop a variable selection procedure for the proposed joint models, which can efficiently and simultaneously estimate and select important variables in the location, scale and skewness models. Simulation studies and a body mass index data analysis are presented to illustrate the proposed methods. | ['Hui-Qiong Li', 'Liucang Wu', 'Ting Ma'] | Variable selection in joint location, scale and skewness models of the skew-normal distribution | 955,493
Consensus Time Synchronization (CTS) overcomes the shortcomings of centralized time synchronization in terms of scalability and robustness to node failure. However, CTS suffers from a slow convergence rate, high communication traffic and an inability to synchronize to an external time source. This paper proposes a novel distributed time synchronization protocol for WSNs, the Consensus-based Multi-hop Time Synchronization (CMTS) protocol. CMTS combines the benefits of a consensus-based scheme, a multi-level topology, synchronization by overhearing, master-node synchronization, and MAC-layer timestamping. Simulations are performed to validate the effectiveness of CMTS. The results show that CMTS achieves high accuracy and improves the convergence time compared to competing schemes in the literature. | ['Amin Saiah', 'Chafika Benzaid', 'Nadjib Badache'] | CMTS: Consensus-based Multi-hop Time Synchronization protocol in wireless sensor networks | 967,255
Simple and efficient location algorithms are of great research significance for Wireless Sensor Network (WSN) systems. In this paper, the main factors contributing to position error in the Centroid location algorithm are analysed, and a Furthermost Beacon (FB) location algorithm is then presented. Within the communication radius of the node to be located, the algorithm selects four or more beacon nodes with the highest distribution degree around the unknown node. A group of experiments has been conducted, and the results demonstrate the superior performance of the proposed algorithm. | ['Wei Chen', 'Qian Wang', 'Xin Wang', 'Qi Chong Tian'] | A Centroid location algorithm based on Furthermost Beacon with application to wireless sensor networks | 482,165
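A toy sketch of centroid-style localization: the unknown node averages the positions of beacons heard within its radius. The greedy "spread-out" beacon selection below is only an invented stand-in for the paper's distribution-degree criterion, and all coordinates are synthetic.

```python
# Centroid localization with an illustrative "well-spread beacons" selection step.
import numpy as np

rng = np.random.default_rng(2)
unknown = np.array([5.0, 5.0])                       # true position (unknown to the node)
beacons = rng.uniform(0, 10, size=(25, 2))           # beacon positions in a 10x10 field
R = 4.0                                              # communication radius

heard = beacons[np.linalg.norm(beacons - unknown, axis=1) <= R]

def spread_subset(pts, m=4):
    """Greedily pick m points that are far from each other (farthest-point heuristic)."""
    chosen = [int(np.argmax(np.linalg.norm(pts - pts.mean(axis=0), axis=1)))]
    while len(chosen) < min(m, len(pts)):
        d = np.min(np.linalg.norm(pts[:, None] - pts[chosen][None, :], axis=2), axis=1)
        d[chosen] = -1
        chosen.append(int(np.argmax(d)))
    return pts[chosen]

est_all = heard.mean(axis=0)                         # plain centroid of all heard beacons
est_fb = spread_subset(heard).mean(axis=0)           # centroid of a spread-out subset
print("error, all heard beacons :", np.linalg.norm(est_all - unknown))
print("error, spread-out subset :", np.linalg.norm(est_fb - unknown))
```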
Video segmentation has been an important and challenging issue for many video applications. There are usually two different video segmentation approaches, i.e., shot-based segmentation, which uses a set of key-frames to represent a video shot, and object-based segmentation, which partitions a video shot into objects and background. Representing a video shot at different semantic levels, the two segmentation processes are usually implemented separately or independently for video analysis. In this paper, we propose a new approach that combines the two video segmentation techniques. Specifically, a combined key-frame extraction and object-based segmentation method is developed based on state-of-the-art video segmentation algorithms and statistical clustering approaches. On the one hand, shot-based segmentation can dramatically facilitate and enhance object-based segmentation by using key-frame extraction to select a few key-frames for statistical model training. On the other hand, object-based segmentation can be used to improve shot-based segmentation results by using model-based key-frame refinement. The proposed approach is able to integrate the advantages of these two segmentation methods and provides a new combined shot-based and object-based framework for a variety of advanced video analysis tasks. Experimental results validate the effectiveness and flexibility of the proposed video segmentation algorithm. | ['Lijie Liu', 'Guoliang Fan'] | Combined key-frame extraction and object-based video segmentation | 365,767
In this paper, we evaluate the performance of a dynamic approach to classifying flow patterns reconstructed by a switching-mode macroscopic flow model, considering a multivariate clustering method. To remove noise and tolerate a wide scatter of traffic data, filters are applied before the overall modeling process. Filtered data are dynamically and simultaneously input to the density estimation and traffic flow modeling processes. A modified cell transmission model simulates traffic flow to explicitly account for flow condition transitions, considering wave propagation throughout a freeway test stretch. We use flow dynamics specific to each of the cells to determine the mode of the prevailing traffic conditions. Flow dynamics are then reconstructed by neural methods. By using two methods in part, i.e., dynamic classification and nonhierarchical clustering, a classification of flow patterns over the fundamental diagram is obtained with traffic density as the pattern indicator. The fundamental diagram of speed-flow is dynamically updated to specify the current corresponding flow pattern. The dynamic classification approach returned promising results in capturing sudden changes in flow patterns on the test stretch, and performed well compared to multivariate clustering. The dynamic methods applied here are open to use in practice within intelligent management strategies, including incident detection and control and variable speed management. | ['Hilmi Berk Celikoglu', 'Mehmet Ali Silgu'] | Extension of Traffic Flow Pattern Dynamic Classification by a Macroscopic Model Using Multivariate Clustering | 638,770
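For illustration, a bare-bones cell transmission update is sketched below: each cell's density changes according to the flow its upstream neighbour can send and it can receive. The triangular fundamental-diagram parameters and boundary demands are assumptions, not calibrated to the paper's test stretch, and the switching-mode and classification layers are omitted.

```python
# Minimal cell transmission model (CTM) update on a string of freeway cells.
import numpy as np

n_cells, steps = 20, 200
dx, dt = 0.5, 0.005                      # km, hours (dt = dx / free-flow speed)
vf, w = 100.0, 20.0                      # free-flow and backward-wave speeds (km/h)
rho_jam, q_max = 150.0, 2000.0           # jam density (veh/km), capacity (veh/h)
up_demand = 1200.0                       # assumed upstream arrival flow (veh/h)

rho = np.full(n_cells, 20.0)             # start near critical density
rho[10:14] = 120.0                       # an initial congested pocket (e.g., after an incident)

for _ in range(steps):
    demand = np.minimum(vf * rho, q_max)                 # what each cell can send
    supply = np.minimum(w * (rho_jam - rho), q_max)      # what each cell can receive
    flow = np.minimum(demand[:-1], supply[1:])           # flows across internal interfaces
    inflow = np.concatenate(([min(up_demand, supply[0])], flow))
    outflow = np.concatenate((flow, [demand[-1]]))       # free-flow downstream boundary
    rho = rho + dt / dx * (inflow - outflow)             # conservation update

print("final densities (veh/km):", np.round(rho[:10], 1))
```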
In this paper, a successive approximation register (SAR) analog-to-digital converter (ADC) designed for an M-PAM receiver and computational intelligence applications is presented. By applying a Vcm-based switching method that reduces the switching power of the DAC, the proposed SAR ADC uses fewer capacitors in the DAC array. In addition, asynchronous control logic is used, so no external high-frequency clock is needed to drive the ADC. The design also provides an automatic gain control (AGC) scheme for pulse amplitude modulation (PAM) with analog-to-digital converters (ADCs). | ['Wen Cheng Lai'] | Design Successive Approximation Register Analog-to-Digital Converter with Vcm-Based Method for M-PAM Receiver and Computational Intelligence Application | 608,803
In a cognitive radio network (CRN), secondary users (SUs) opportunistically utilize idle licensed spectrum bands. We address the natural questions that arise when the incumbents or primary users (PUs) return to the channel the SUs are using opportunistically. Instead of immediately switching to another idle channel as proposed in almost all existing approaches, the SUs may opt to wait silently in their current channel until the PUs depart. This option is beneficial to the SUs if the returned PUs stay on the channel only for a short period of time and the SUs' channel switching incurs a non-negligible overhead. We determine how long the SUs should wait in their current channel before switching to a new idle channel. The SUs should also occasionally sense those (so-called out-of-band) channels currently not in use, to detect the availability of spectrum opportunities. We propose an efficient, adaptive spectrum-sensing technique to detect when a busy out-of-band channel becomes idle. We also present a spectrum-management architecture that integrates the SUs' strategies and facilitates fast discovery of spectrum opportunities. | ['Caoxie Zhang', 'Kang G. Shin'] | What Should Secondary Users Do Upon Incumbents' Return? | 292,344
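A back-of-the-envelope sketch of the wait-or-switch trade-off posed above, comparing expected delays under an assumed exponential PU busy period and a fixed switching overhead; the model and the numbers are illustrative assumptions, not the paper's analysis.

```python
# Compare "always wait", "always switch", and "wait up to a deadline, then switch".
import numpy as np

rng = np.random.default_rng(0)
mean_pu_stay = 0.5        # assumed mean PU occupancy after return (seconds)
switch_cost = 0.8         # assumed time lost when the SU switches channels (seconds)
trials = 100_000

pu_stay = rng.exponential(mean_pu_stay, trials)
wait_delay = pu_stay.mean()                 # expected delay if the SU always waits
switch_delay = switch_cost                  # delay if the SU always switches immediately

# Hybrid policy: wait up to a deadline, then switch if the PU is still there
deadline = 0.6
hybrid = np.where(pu_stay <= deadline, pu_stay, deadline + switch_cost).mean()

print(f"always wait               : {wait_delay:.3f} s")
print(f"always switch             : {switch_delay:.3f} s")
print(f"wait {deadline}s then switch: {hybrid:.3f} s")
```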
In this paper, we present an automatic understanding system for Chinese name cards. After preprocessing of an input card image, we grouped characters into item blocks and segmented characters according to their aspect ratios and gap widths. After character extraction, we sent characters to a statistical multi-font character recognizer. We identified items according to their geometric characteristics and embedded key-characters. A user can edit segmentation results interactively and then save the results in the database for further applications. The high performance in the experiments shows the effectiveness of the proposed system. | ['Hsi-Jian Lee', 'Shan-Hung Lee'] | Design of a Chinese name card understanding system | 351,993 |