abstract | authors | title | __index_level_0__ |
---|---|---|---|
When constructing a large-scale database system, data replication is useful to give users efficient access to the data. However, for databases with a very large number of replica sites, traditional network-based concurrency control protocols make communication too costly for practical use. In this paper we propose a new protocol called ECHO for concurrency control of replica data distributed on a large scale. Using broadcasting, such as satellite broadcasting, terrestrial broadcasting, and cable television, the ECHO method can keep communication costs constant, regardless of the number of sites. The ECHO method also has a backup mechanism, using the communication network, for failed broadcast reception. | ['Yukari Shirota', 'Atsushi Iizawa', 'Hiroko Mano', 'Takashi Yano'] | The ECHO method: concurrency control method for a large-scale distributed database | 120,134 |
Although RDF graph data often come with an associated schema, recent studies have proven that real RDF data rarely conform to their perceived schemas. Since a number of data management decisions, including storage layouts, indexing, and efficient query processing, use schemas to guide the decision making, it is imperative to have an accurate description of the structuredness of the data at hand (how well the data conform to the schema). In this paper, we have approached the study of the structuredness of an RDF graph in a principled way: we propose a framework for specifying structuredness functions, which gauge the degree to which an RDF graph conforms to a schema. In particular, we first define a formal language for specifying structuredness functions with expressions we call rules. This language allows a user to state a rule to which an RDF graph may fully or partially conform. Then we consider the issue of discovering a refinement of a sort (type) by partitioning the dataset into subsets whose structuredness is over a specified threshold. In particular, we prove that the natural decision problem associated to this refinement problem is NP-complete, and we provide a natural translation of this problem into Integer Linear Programming (ILP). Finally, we test this ILP solution with three real world datasets and three different and intuitive rules, which gauge the structuredness in different ways. We show that the rules give meaningful refinements of the datasets, showing that our language can be a powerful tool for understanding the structure of RDF data, and we show that the ILP solution is practical for a large fraction of existing data. | ['Marcelo Arenas', 'Gonzalo I. Diaz', 'Achille Fokoue', 'Anastasios Kementsietsidis', 'Kavitha Srinivas'] | A principled approach to bridging the gap between graph data and their schemas | 5,269 |
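For intuition, one simple structuredness function of the kind such a rule language can express is a coverage measure (the notation below is mine, not the paper's formal language):

```latex
% Coverage of a sort (type) T in an RDF graph D: the fraction of
% (instance, property) cells of T that are actually populated.
\sigma_{\mathrm{cov}}(T, D) =
  \frac{\sum_{i \in I(T)} \bigl|\{\, p \in P(T) : (i,p) \text{ set in } D \,\}\bigr|}
       {|I(T)| \cdot |P(T)|}
```

Here $I(T)$ is the set of instances of $T$ and $P(T)$ its schema properties; $\sigma_{\mathrm{cov}} = 1$ means every instance sets every property (full conformance), and the refinement problem asks for a partition of the instances into few subsets whose structuredness each exceeds a threshold.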
We introduce a logical verification methodology for checking behavioral properties of service-oriented computing systems. Service properties are described by means of SocL, a branching-time temporal logic that we have specifically designed for expressing in an effective way distinctive aspects of services, such as, acceptance of a request, provision of a response, correlation among service requests and responses, etc. Our approach allows service properties to be expressed in such a way that they can be independent of service domains and specifications. We show an instantiation of our general methodology that uses the formal language COWS to conveniently specify services and the expressly developed software tool CMC to assist the user in the task of verifying SocL formulas over service specifications. We demonstrate the feasibility and effectiveness of our methodology by means of the specification and analysis of a case study in the automotive domain. | ['Alessandro Fantechi', 'Stefania Gnesi', 'Alessandro Lapadula', 'Franco Mazzanti', 'Rosario Pugliese', 'Francesco Tiezzi'] | A logical verification methodology for service-oriented computing | 248,603 |
High quality and affordable support for chronic health conditions is simultaneously one of the great challenges and opportunities for the application of ubiquitous computing technologies. Mobile networks of wearable connected devices and sensors have the potential to offer data driven personalized support and contextually aware real-time advice that could help people in their everyday lives. My initial research has suggested three key areas for the implementation of such systems: reducing workload, automating extraction of meaningful information, and the communication of insights in an intuitive, timely and emotionally sensitive manner. While there are many technical issues involved with realization, the human factors related to user interaction with personal data will be critical in building systems that motivate users to choose beneficial lifestyle choices, and will therefore be a major focus of my research. | ['Dmitri Katz'] | Investigating the viability of automated, intuitive, and contextual insights for chronic disease self-management using ubiquitous computing technologies | 886,267 |
We consider the problem of inferring a genetic network from noisy data. This is done under the Temporal Boolean Network Model. Owing to the hardness of the problem, we propose a heuristic approach based on the combined utilization of evolutionary algorithms and other existing algorithms. The main features of this approach are the heuristic seeding of the initial population, the utilization of a specialized recombination operator, and the use of a majority-voting procedure in order to build a consensus solution. Experimental results provide support for the potential usefulness of this approach. | ['Carlos Cotta', 'José M. Troya'] | Reverse engineering of temporal Boolean networks from noisy data using evolutionary algorithms | 426,541 |
A four degrees of freedom (DoF) waist and trunk mechanism, as well as a human-like foot, enable the humanoid robot WABIAN-2R to perform a human-like walk with stretched knees and heel-contact and toe-off gait phases. The inverse kinematics (IK) method used in the present system requires specification not only of task-space reference trajectories, but also of reference trajectories for all redundant DoFs. In this paper, we propose a novel, unified inverse kinematics method that significantly simplifies pattern generation. The method enables generation of the above-described gait by specifying only the task-space trajectories. We divide the forward locomotion task into subtasks with different priorities and combine them in a single IK equation. We also perform experiments in a simulation environment as well as on WABIAN-2R, which prove that the method can be used to calculate IK for a human-like gait. The equation evaluated in this paper is applied to the forward locomotion task; however, it can be easily modified to perform other tasks on humanoid robots with different kinematic structures. | ['Przemyslaw Kryczka', 'Kenji Hashimoto', 'Hideki Kondo', 'Aiman Musa M Omer', 'Hun-ok Lim', 'Atsuo Takanishi'] | Stretched knee walking with novel inverse kinematics for humanoid robots | 438,873 |
The Earth is a water planet, two-thirds of which is covered by water. With the rapid developments in technology, underwater communications has become a fast growing field, with broad applications in commercial and military water based systems. The need for underwater wireless communications exists in applications such as remote control in the off-shore oil industry, pollution monitoring in environmental systems, collection of scientific data from ocean-bottom stations, disaster detection and early warning, national security and defense (intrusion detection and underwater surveillance), as well as new resource discovery. Thus, the research of new underwater wireless communication techniques has played the most important role in the exploration of oceans and other aquatic environments. In contrast with terrestrial wireless radio communications, the communication channels in underwater wireless networks can be seriously affected by the marine environment, by noise, by limited bandwidth and power resources, and by the harsh underwater ambient conditions. Hence, the underwater communication channel often exhibits severe attenuation, multipath effect, frequency dispersion, and constrained bandwidth and power resources, etc., which turn the underwater communication channel into one of the most complex and harsh wireless channels in nature. When facing these unique conditions in diverse underwater applications, many new challenges, which were not encountered in terrestrial wireless communications, are emerging in underwater acoustic, optical, and RF communications for future underwater wireless networks. Of these challenges, acoustic and optical are the most compelling, and somewhat complementary, owing to the potential for longer range and high bandwidth networked communications in size- and power-constrained modems and unmanned systems. | ['Xi Zhang', 'Jun-Hong Cui', 'Santanu Das', 'Mario Gerla', 'Mandar Chitre'] | Underwater wireless communications and networks: theory and application: Part 1 [Guest Editorial] | 142,950 |
Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in its original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balancing the workload... | ['Zhenlong Li', 'Fei Hu', 'John L. Schnase', 'Daniel Q. Duffy', 'Tsengdar Lee', 'Michael K. Bowen', 'Chaowei Yang'] | A Spatiotemporal Indexing Approach for Efficient Processing of Big Array-Based Climate Data with MapReduce | 690,058 |
A necessary condition for the widely used additive value function is total preferential independence, or somewhat equivalently, total substitutability among the decision criteria. We consider cases where total substitutability is absent, and study the value functions that are applicable to such cases. First we take the case of total nonsubstitutability, and prove that the maximin value function is appropriate for it. This result easily extends to the closely related maximax value function. Next we consider the case where there is neither total substitutability nor total nonsubstitutability, and show how a minsum value function can be applicable. A minsum function is one that uses only addition and minimum extraction operations. We explain how the structure of a minsum function can be inferred from substitutability information. In the process, we encounter certain subsets of criteria which we call chains and cuts. | ['Jayavel Sounderpandian'] | Value Functions When Decision Criteria Are Not Totally Substitutable | 3,939 |
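As a hedged illustration of the three functional forms the abstract contrasts (the notation and the particular minsum shape below are mine, not the paper's):

```latex
% Additive (total substitutability): weighted sum of criterion values
V_{\mathrm{add}}(x)     = \sum_{i=1}^{n} w_i\, v_i(x_i)
% Maximin (total nonsubstitutability): the worst criterion decides
V_{\mathrm{maximin}}(x) = \min_{1 \le i \le n} v_i(x_i)
% One possible minsum form (mixed case): addition over criteria groups
% C_1,...,C_m, with minimum extraction within each group
V_{\mathrm{minsum}}(x)  = \sum_{j=1}^{m} \min_{i \in C_j} v_i(x_i)
```

The maximax case replaces the minimum with a maximum in the second form; the groups in the third form correspond to the subsets of criteria ("chains" and "cuts") inferred from substitutability information.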
The presence of missing entries in DNA microarray gene expression datasets creates severe problems in downstream analysis because they require complete datasets. Though several missing value prediction methods have been proposed to solve this problem, they have limitations which may affect the performance of various analysis algorithms. In this regard, a novel distance-based iterative sequential K-nearest-neighbour imputation method (ISKNNimpute) has been proposed. The proposed distance is a hybridisation of modified Euclidean distance and Pearson correlation coefficient. The proposed method is a modification of KNN estimation in which the concept of reuse of estimation is considered using both iterative and sequential approaches. The performance of the proposed ISKNNimpute method is tested on various time-series and non-time-series microarray datasets, comparing it with several widely used existing imputation techniques. The experimental results confirm that the ISKNNimpute method consistently generates better... | ['Chandra Das', 'Shilpi Bose', 'Matangini Chattopadhyay', 'Samiran Chattopadhyay'] | A novel distance-based iterative sequential KNN algorithm for estimation of missing values in microarray gene expression data | 949,389 |
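A minimal sketch of hybrid-distance KNN imputation of the kind described above; the exact modification of the Euclidean term, the mixing weight `alpha`, and the `1 - |r|` correlation term are assumptions of this sketch, not the paper's definitions:

```python
import numpy as np

def hybrid_distance(a, b, alpha=0.5):
    """Blend a length-normalized Euclidean distance with a Pearson
    correlation term, computed over mutually observed entries only."""
    mask = ~(np.isnan(a) | np.isnan(b))
    a, b = a[mask], b[mask]
    if a.size < 2:
        return np.inf                       # not enough overlap to compare
    euclid = np.linalg.norm(a - b) / np.sqrt(a.size)
    r = np.corrcoef(a, b)[0, 1]
    return alpha * euclid + (1 - alpha) * (1 - abs(r))

def knn_impute_row(target, candidates, k=10):
    """Fill NaNs in `target` with the mean over the k nearest candidate rows."""
    order = np.argsort([hybrid_distance(target, c) for c in candidates])
    nearest = [candidates[i] for i in order[:k]]
    for j in np.where(np.isnan(target))[0]:
        vals = [c[j] for c in nearest if not np.isnan(c[j])]
        if vals:
            target[j] = np.mean(vals)
    return target
```

The iterative/sequential part of ISKNNimpute would wrap this in an outer loop that processes rows in order of missingness and feeds already-imputed rows back into the candidate pool, which is where the reuse-of-estimation idea enters.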
The interference to the primary receiver (PR) is a critical issue in the resource allocation of cognitive radio (CR) networks. For instance, the nonlinearity of the power amplifier (PA) causes nonlinear interference to the PRs. This paper studies the power allocation in cognitive radio networks by considering the nonlinear effects of the PA on the received signal-to-noise ratio (SNR) at the secondary receiver (SR) and the adjacent channel interference (ACI) to the PRs. A nonlinear PA with limited dynamic range and a lower limit on the transmit power is assumed for the secondary transmitter (ST). To control the resulting ACI from the ST to the PRs, the PA needs to be turned off in some fading blocks. To investigate the throughput, an analytical expression for the probability of data transmission between the secondary users is derived as a function of the interference temperature limits of the PRs. All analyses are performed for both peak and average ACI power constraints. Through theoretical analysis and simulation studies, maximum achievable average SNR at the SR is investigated. Moreover, the throughput degradation is studied and it is observed that the average ACI power constraints result in the better performance than the peak ones. | ['Mahdi Majidi', 'Abbas Mohammadi', 'Abdolali Abdipour'] | Analysis of the Power Amplifier Nonlinearity on the Power Allocation in Cognitive Radio Networks | 444,138 |
We describe the design principles and functionality of a visual query language called See QL that represents data retrieval and analysis operations as a data-flow graph. A query is viewed as a sequence of relational algebra and other data transformation operations applied to database tables. The language is well-suited for large-scale scientific database applications, where data analysis is a major component and the typical queries or data retrieval patterns are unrestricted. The language provides a flexible yet easy-to-use environment for database access and data analysis for non-programmer research scientists. We have implemented this language in a system being used in a long-term data-intensive highway pavement research project (MnRoad) conducted by the Minnesota Department of Transportation. | ['Bosco S. Tjan', 'Len Breslow', 'Sait Dogru', 'Vijay Rajan', 'Keith Rieck', 'James R. Slagle', 'Marius O. Poliac'] | A data-flow graphical user interface for querying a scientific database | 282,232 |
Pervasive self-care solutions in telecardiology. Typical use cases from the EPI-MEDICS project. | ['F. Gouaux', 'Chautemps Ls', 'J. Fayn', 'Stefano Adami', 'M. Arzi', 'Deodato Assanelli', 'M.C. Forlini', 'C. Malossi', 'Alvaro Martinez', 'Magnus C. Ohlsson', 'J. Placide', 'G.L. Ziliani', 'Paul Rubel'] | Pervasive self-care solutions in telecardiology. Typical use cases from the EPI-MEDICS project. | 781,575 |
There are not many tools in the evolutionary computing field that allow researchers to implement, modify or compare different algorithms. Additionally, those tools usually lack flexibility, maintainability or some other desirable characteristic, so researchers program their own solutions most of the time, reimplementing algorithms that have already been implemented hundreds of times. This paper introduces a new framework for evolutionary computation called JEAF (Java Evolutionary Algorithm Framework) that tries to offer a platform to facilitate the tasks of comparing, analyzing, modifying and implementing evolutionary algorithms, reusing components and writing as little new code as possible. JEAF also aims to be a tool for evolutionary algorithm users that employ these algorithms to solve other problems not related with evolutionary computation. In this sense, JEAF provides methods to distribute an evolutionary process and to plug external tools to perform the evaluation of candidate solutions. | ['Pilar Caamaño', 'Rafael Tedin', 'Alejandro Paz-Lopez', 'José Antonio Becerra'] | JEAF: A Java Evolutionary Algorithm Framework | 235,986 |
Michel Waisvisz's The Hands is one of the most famous and long-lasting research projects in the literature of digital music instruments. Consisting of a pair of data gloves and exhibited for the first time in 1984, The Hands is a pioneering work in digital devices for performing live music. It is a work that engaged Waisvisz for almost a quarter of a century and, in turn, has inspired many generations of music technologists and performers of live music. Despite being often cited in the relevant literature, however, the documentation concerning the sensor architecture, design, mapping strategies, and development of these data gloves is sparse. In this article, we aim to fill this gap by offering a detailed history behind the development of The Hands. The information contained in this article was retrieved and collated by searching the STEIM archive, interviewing close collaborators of Waisvisz, and browsing through the paper documentation found in his personal folders and office. | ['Giuseppe Dalla Torre', 'Kristina Andersen', 'Frank Baldé'] | The hands: The making of a digital musical instrument | 797,290 |
Inverse, or negative binomial, sampling is often used when the observation of interest occurs extremely infrequently. As this is the case in bit error rate (BER) simulations, especially in high signal-to-noise ratio cases, negative binomial sampling can be advantageously employed in a computationally economic fashion to compare bit error rates between different systems. When the results of two negative binomial sampling tests are compared, point estimates and interval estimates quantify the performance relationship between the results of the tests. This paper derives a new, optimal, logarithmically symmetric confidence interval estimator for the ratio of BER estimates derived from two negative binomial tests. In addition, a three-sided hypothesis test with a single significance level is derived to quantify the confidence of the relationship between the two systems. Low-BER approximations for the confidence interval and decision thresholds are derived based on the F-distribution. The approximation is shown to work with BERs as high as $10^{-2}$ . An example inspired by bit interleaved coded modulation shows how the technique can be used to reduce simulation time by an order of magnitude and facilitate straightforward interpretation and comparison between different systems. Negative binomial sampling is recommended for comparison experiments where BER is the key metric. | ['Brian A. Mazzeo', 'Michael Rice'] | Bit Error Rate Comparison Statistics and Hypothesis Tests for Inverse Sampling (Negative Binomial) Experiments | 719,500 |
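A hedged sketch of the inverse-sampling workflow described above; the F-distribution interval shown is a generic low-BER approximation in the spirit of the paper's approach, not its exact estimator:

```python
from scipy import stats

def run_until_k_errors(simulate_bit, k=100):
    """Inverse (negative binomial) sampling: simulate single bits until
    k errors are seen; returns (errors, trials). `simulate_bit` is a
    user-supplied function returning 1 on a bit error and 0 otherwise."""
    trials, errors = 0, 0
    while errors < k:
        errors += simulate_bit()
        trials += 1
    return errors, trials

def ber_ratio_ci(k1, n1, k2, n2, confidence=0.95):
    """Approximate confidence interval for BER1/BER2 from two inverse
    sampling runs (k errors in n trials each), using an F-distribution
    low-BER approximation."""
    ratio = (k1 / n1) / (k2 / n2)
    a = 1 - confidence
    lower = ratio / stats.f.ppf(1 - a / 2, 2 * k1, 2 * k2)
    upper = ratio * stats.f.ppf(1 - a / 2, 2 * k2, 2 * k1)
    return lower, upper
```

Fixing the number of errors rather than the number of trials keeps the relative precision of each BER estimate roughly constant whatever the true BER, which is what makes inverse sampling attractive in high-SNR (low-BER) regimes.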
In this article, we propose a PHY-caching scheme for 5G wireless networks to achieve spectral efficiency gain over conventional caching schemes, which do not induce physical layer cooperation. By properly caching some popular contents at the BSs, the proposed PHY-caching can opportunistically transform the topology of the RAN from an unfavorable topology (e.g., relay or interference topology) into a more favorable MIMO broadcast topology and enjoy spectral efficiency gain. Specifically, we first introduce a generic cache model and cache-assisted PHY transmission, and show that PHY-caching can significantly enhance the spectral efficiency of the wireless network by inducing dynamic side information at the BSs. Then we discuss the design challenges and solutions of PHY-caching. We introduce maximum-distance-separable-coded caching and online cache content placement design as a potential solution. As a case study, we analyze the performance trade-off between PHY-caching at the BS and CN-caching at the gateway under capacity-limited CN-fronthaul. We show that even though PHY-caching covers fewer users than CN-caching and results in a lower cache hit rate, it is still efficient to do PHY-caching at the BS. | ['Wei Han', 'An Liu', 'Vincent Kin Nang Lau'] | PHY-caching in 5G wireless networks: design and analysis | 871,668 |
On the Regularity of Lossy RSA - Improved Bounds and Applications to Padding-Based Encryption. | ['Adam D. Smith', 'Ye Zhang'] | On the Regularity of Lossy RSA - Improved Bounds and Applications to Padding-Based Encryption. | 783,391 |
Energy efficiency of computing devices has become a dominant area of research interest in recent years. Most previous work has focused on architectural techniques to improve power and energy efficiency; only a few consider saving energy at the algorithmic level. We prove that a region of perfect strong scaling in energy exists for matrix multiplication (classical and Strassen) and the direct n-body problem via the use of algorithms that use all available memory to replicate data. This means that we can increase the number of processors by some factor and decrease the runtime (both computation and communication) by the same factor, without changing the total energy use. | ['James Demmel', 'Andrew Gearhart', 'Benjamin Lipshitz', 'Oded Schwartz'] | Perfect Strong Scaling Using No Additional Energy | 117,621 |
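A schematic model in the spirit of such analyses (the terms and constants here are illustrative, not the paper's exact model):

```latex
% F = total flops, W(P) = total words communicated on P processors;
% \gamma, \beta = time per flop / per word moved;
% \hat\gamma, \hat\beta = energy per flop / per word;
% \epsilon = static (leakage) power per processor.
T(P) = \gamma\,\frac{F}{P} + \beta\,\frac{W(P)}{P}, \qquad
E(P) = \hat\gamma\, F + \hat\beta\, W(P) + \epsilon\, P\, T(P)
```

If the extra memory that comes with more processors is used to replicate data so that the total communication volume $W(P)$ does not grow, then $T(cP) = T(P)/c$ while $E(cP) \le E(P)$: runtime drops by the same factor as the processor count at no additional energy, which is the perfect strong scaling region.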
Previous architectures of pervasive computing are customized for specific types of applications. In this paper, we propose a new architecture named iShadow, which facilitates the design and implementation of generic applications in pervasive computing environment. iShadow gracefully integrates physical spaces and human attention, and provides fundamental and flexible support to construct pervasive applications rapidly. Significant differences of iShadow from previous works are lightweight user-shadow model, scalable resource discovery and potent context inference mechanism. Our prototypes demonstrate that the iShadow architecture is robust, feasible and effective for pervasive applications. | ['Daqiang Zhang', 'Hu Guan', 'Jingyu Zhou', 'Feilong Tang', 'Minyi Guo'] | iShadow: Yet Another Pervasive Computing Environment | 304,346 |
The BCJR algorithm is an exact and efficient algorithm to compute the marginal posterior distributions of state variables and of pairs of consecutive state variables of a trellis structure. Due to its overwhelming complexity, reduced-complexity variations, such as the M-BCJR algorithm, have been developed. In this paper, we propose improvements upon the conventional M-BCJR algorithm based on modified active state selection criteria. We propose selecting the active states based on estimates of the fixed-lag smoothed distributions of the state variables. We also present Gaussian approximation techniques for the low-complexity estimation of these fixed-lag smoothed distributions. The improved performance over the M-BCJR algorithm is shown via computer simulations. | ['Cheran M. Vithanage', 'Christophe Andrieu', 'Robert J. Piechocki'] | Novel Reduced-State BCJR Algorithms | 70,894 |
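For reference, the exact recursions that the M-BCJR truncates are the standard forward-backward ones (generic notation, not specific to this paper):

```latex
\alpha_k(s) = \sum_{s'} \alpha_{k-1}(s')\,\gamma_k(s', s), \qquad
\beta_k(s)  = \sum_{s'} \gamma_{k+1}(s, s')\,\beta_{k+1}(s'), \qquad
p(s_k = s \mid \mathbf{y}) \propto \alpha_k(s)\,\beta_k(s)
```

The M-BCJR keeps only the $M$ states with the largest $\alpha_k(s)$ at each trellis step; the criterion proposed above instead ranks states by an estimate of the fixed-lag smoothed distribution $p(s_k \mid y_{1:k+\Delta})$, folding a short look-ahead into the selection.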
The visual concept of an object category is usually composed of a set of sub-categories corresponding to different sub-classes, perspectives, spatial configurations, etc. Existing detector training algorithms usually require extensive supervisory information to achieve a satisfactory performance for sub-categorization. In this paper, we propose a detector training algorithm which can automatically mine meaningful sub-categories utilizing only the image contents within the training bounding boxes. The number of sub-categories can also be determined automatically. The mined sub-categories are of medium size and could be further labeled for a variety of applications such as sub-category detection and meta-data transfer. Promising detection results are obtained on the challenging PASCAL VOC dataset. | ['Jifeng Dai', 'Jianjiang Feng', 'Jie Zhou'] | Mining sub-categories for object detection | 346,457 |
Machine vision algorithms and a supporting architecture that were integrated in a fully automated prototype system for disk head inspection are presented. Some specific methods are elaborated on, including the computation of the Hough transform and multicode masks in pipeline architectures, object segmentation in textured backgrounds, and matching of extracted defects with inspection specifications. Extensive experimental results are given. | ['Jorge L. C. Sanz', 'Dragutin Petkovic'] | Machine vision algorithms for automated inspection of thin-film disk heads | 20,333 |
Microshrinkages are probably the most difficult defects to avoid in high-precision foundry. Depending on the magnitude of this defect, the piece in which it appears must be rejected with the subsequent cost increment. Modelling this environment as a probabilistic constellation of interrelated variables allows Bayesian networks to infer causal relationships. In other words, they may guess the value of a variable (for instance, the presence or not of a defect). Against this background, we present here the first microshrinkage prediction system that, on the basis of a Bayesian network, is able to foresee the appearance of this defect and to determine whether the piece is still acceptable or not. Further, after testing this system in a real foundry, we discuss the obtained results and present a risk-level-based production methodology that increases the rate of valid manufactured pieces. | ['Yoseba K. Penya', 'Pablo García Bringas', 'Argoitz Zabala'] | Efficient failure-free foundry production | 201,996 |
Retrieval of a complex multimedia event has long been regarded as a challenging task. Multimedia event recounting, other than event detection, focuses on providing comprehensible evidence which justifies a detection result. Recounting enables "video skimming", which not only enhances video exploration, but also makes human-in-the-loop possible for improving the detection result. Most existing systems treat event recounting as a disjoint post-processing step over the result of event detection. Unlike these systems, this doctoral research aims to provide an in-depth understanding of how recounting, i.e., evidence localization, helps in event detection in the first place. It can potentially benefit the overall design of an efficient event detection system with or without human-in-the-loop. More importantly, we propose a framework for detecting and recounting everyday events without any needs of training examples. The system only takes a text description of an event as input, then performs evidence localization, event detection and recounting in a large, unlabelled video corpus. The goal of the system is to take advantage of event recounting which eventually improves zero-example event detection. We present preliminary results and work in progress. | ['Yi-Jie Lu'] | Zero-Example Multimedia Event Detection and Recounting with Unsupervised Evidence Localization | 896,070 |
This paper analyzes the restrictions necessary to ensure that the interest rate policy rule used by the central bank does not introduce local real indeterminacy into the economy. It conducts the analysis in a Calvo-style sticky price model. A key innovation is to add investment spending to the analysis. In this environment, local real indeterminacy is much more likely. In particular, all forward-looking interest rate rules are subject to real indeterminacy. | ['Charles T. Carlstrom', 'Timothy S. Fuerst'] | Investment and Interest Rate Policy: A Discrete Time Analysis | 214,983 |
Sea surface simulations are needed to generate realistic glitter in Monte-Carlo testing of automatic target detection in electro-optical imagery of sea scenes. Glitter is determined by the sea surface's height and slope. At present sea surfaces are generally modelled as correlated random fields. Current algorithms for generating large realizations of random fields generally produce correlated Gaussian fields, but the available empirical statistics on sea surfaces show a non-Gaussian distribution of point slope values. The paper introduces a class of non-Gaussian random fields with specified correlation functions and point distributions of slopes generated by pointwise transformation of Gaussian fields. This definition allows generation of large scale simulations of such fields through simple pointwise transformation of simulations of the associated Gaussians. | ['Garry N. Newsam', 'Michael Wegener'] | Generating non-Gaussian random fields for sea surface simulations | 228,166 |
Socionics is an interdisciplinary approach with the objective to use sociological knowledge about the structures, mechanisms and processes of social interaction and social communication as a source of inspiration for the development of multi-agent systems, both for the purposes of engineering applications and of social theory construction and social simulation. The approach has been developed since 1998 within the Socionics priority program funded by the German National Research Foundation. This special issue of the JASSS presents research results from five interdisciplinary projects of the Socionics program. The introduction gives an overview of the basic ideas of the Socionics approach and summarizes the work of these projects. | ['Thomas Malsch', 'Ingo Schulz-Schaeffer'] | Socionics: Sociological Concepts for Social Systems of Artificial (and Human) Agents | 181,317 |
In this paper, we investigate the numerical error propagation and provide systematic backward stability analysis for some hierarchical rank structured matrix algorithms. We prove the backward stability of various important hierarchically semiseparable (HSS) methods, such as HSS matrix-vector multiplications, HSS ULV linear system solutions, HSS linear least squares solutions, HSS inversions, and some variations. Concrete backward error bounds are given, including a structured backward error for the solution in terms of the structured factors. The error propagation factors involve only low-degree powers of the maximum off-diagonal numerical rank and the logarithm of the matrix size. Thus, as compared with the corresponding standard dense matrix algorithms, the HSS algorithms not only are faster but also have much better stability. We also show that factorization-based HSS solutions are usually preferred, while inversion-based ones may suffer from numerical instability. The analysis builds a comprehensive f... | ['Yuanzhe Xi', 'Jianlin Xia'] | On the Stability of Some Hierarchical Rank Structured Matrix Algorithms | 894,331 |
Simultaneous multithreading (SMT) increases CPU utilization and application performance in many circumstances, but it can be detrimental when performance is limited by application scalability or when there is significant contention for CPU resources. This paper describes an SMT-selection metric that predicts the change in application performance when the SMT level and number of application threads are varied. This metric is obtained online through hardware performance counters with little overhead, and allows the application or operating system to dynamically choose the best SMT level. We have validated the SMT-selection metric using a variety of benchmarks that capture various application characteristics on two different processor architectures. Our results show that the SMT-selection metric is capable of predicting the best SMT level for a given workload in 90% of the cases. The paper also shows that such a metric can be used with a scheduler or application optimizer to help guide its optimization decisions. | ['Justin R. Funston', 'Kaoutar El Maghraoui', 'Joefon Jann', 'Pratap Pattnaik', 'Alexandra Fedorova'] | An SMT-Selection Metric to Improve Multithreaded Applications' Performance | 214,127 |
Summary: We present ProbMetab, an R package that promotes substantial improvement in automatic probabilistic liquid chromatography–mass spectrometry-based metabolome annotation. The inference engine core is based on a Bayesian model implemented to (i) allow diverse source of experimental data and metadata to be systematically incorporated into the model with alternative ways to calculate the likelihood function and (ii) allow sensitive selection of biologically meaningful biochemical reaction databases as Dirichlet-categorical prior distribution. Additionally, to ensure result interpretation by system biologists, we display the annotation in a network where observed mass peaks are connected if their candidate metabolites are substrate/product of known biochemical reactions. This graph can be overlaid with other graph-based analysis, such as partial correlation networks, in a visualization scheme exported to Cytoscape, with web and stand-alone versions. Availability and implementation: ProbMetab was implemented in a modular manner to fit together with established upstream (xcms, CAMERA, AStream, mzMatch.R, etc) and downstream R package tools (GeneNet, RCytoscape, DiffCorr, etc). ProbMetab, along with extensive documentation and case studies, is freely available under GNU license at: http://labpib.fmrp.usp.br/methods/probmetab/. Contact: rvencio@usp.br. Supplementary information: Supplementary data are available at Bioinformatics online. | ['Ricardo Pianta Rodrigues da Silva', 'Fabien Jourdan', 'Diego M. Salvanha', 'Fabien Letisse', 'Emilien L. Jamin', 'Simone Guidetti-Gonzalez', 'Carlos Alberto Labate', 'Ricardo Z. N. Vêncio'] | ProbMetab: an R package for Bayesian probabilistic annotation of LC-MS based metabolomics | 524,629 |
A structure-based approach for ontology partitioning. | ['Flora Amato', 'Aniello De Santo', 'Vincenzo Moscato', 'Fabio Persia', 'Antonio Picariello', 'Silvestro Roberto Poccia', 'Giancarlo Sperlì'] | A structure-based approach for ontology partitioning. | 797,798 |
Collaboration, Openness, Transparency and Trust as Prerequisite for High Quality, Effective and Efficient Health Care. | ['Thomas Karopka', 'Syed Mohamed Aljunid', 'Nurhizam Safie', 'Luis Falcon', 'Holger Schmuhl', 'Kjeld Lisby'] | Collaboration, Openness, Transparency and Trust as Prerequisite for High Quality, Effective and Efficient Health Care. | 732,722 |
The study of subjective visual quality, and the development of computed quality metrics, require accurate and meaningful measurement of visual impairment. A natural unit for impairment is the JND (just-noticeable-difference). In many cases, what is required is a measure of an impairment scale, that is, the growth of the subjective impairment, in JNDs, as some physical parameter (such as amount of artifact) is increased. Measurement of sensory scales is a classical problem in psychophysics. In the method of pair comparison, each trial consists of a pair of samples and the observer selects the one perceived to be greater on the relevant scale. This may be regarded as an extension of the method of forced-choice: from measurement of threshold (one JND), to measurement of the larger sensory scale (multiple JNDs). While simple for the observer, pair comparison is inefficient because if all samples are compared, many comparisons will be uninformative. In general, samples separated by about 1 JND are most informative. We have developed an efficient adaptive method for selection of sample pairs. As with the QUEST adaptive threshold procedure[1], the method is based on Bayesian estimation of the sensory scale after each trial. We call the method Efficient Adaptive Scale Estimation, or EASE ("to make less painful"). We have used the EASE method to measure impairment scales for digital video. Each video was derived from an original source (SRC) by the addition of a particular artifact, produced by a particular codec at a specific bit rate, called a hypothetical reference circuit (HRC). Different amounts of artifact were produced by linear combination of the source and compressed videos. On each pair-comparison trial the observer selected which of two sequences, containing different amounts of artifact, appeared more impaired. The scale is estimated from the pair comparison data using a maximum likelihood method. At the top of the scale, when all of the artifact is present, the scale value is the total number of JNDs corresponding to that SRC/HRC condition. We have measured impairment scales for 25 video sequences, derived from five SRCs combined with each of five HRCs. We find that EASE is a reliable method for measuring impairment scales and JNDs for processed video sequences. We have compared our JND measurements with mean opinion scores for the same sequences obtained at one viewing distance using the DSCQS method by the Video Quality Experts Group (VQEG), and we find that the two measures are highly correlated. The advantages of the JND measurements are that they are in absolute and meaningful units and are unlikely to be subject to context effects. We note that JND measurements offer a means of creating calibrated artifact samples, and of testing and calibrating video quality models. 1. BACKGROUND 1.1. Need for accurate subjective measures of video quality The design and use of digital video systems entail difficult tradeoffs amongst various quantities, of which the two most important are cost and visual quality. While there is no difficulty in measuring cost, beauty remains locked in the eye of the beholder. However, in recent years a number of computational metrics have been developed which purport to measure video quality or video impairment. Metrics of this sort would be very valuable in providing a means for automatically specifying, monitoring, and optimizing the visual quality of digital video. | ['Andrew Watson', 'Lindsay Kreslake'] | Measurement of visual impairment scales for digital video | 451,420 |
Topological changes are common in brain MR images for aging or disease studies. For deformable registration algorithms, which are formulated as a variational problem and solved by the minimization of certain energy functionals, topological changes can cause false deformation in the resulting vector field, and affect algorithm convergence. In this work, we focus on the effect of topological changes on diffeomorphic and inverse-consistent deformable registration algorithms, specifically, diffeomorphic demons and symmetric LDDMM. We first use a simple example to demonstrate the adverse effect of topological changes on these algorithms. Then, we propose a novel framework that can be imposed onto any existing diffeomorphic and inverse-consistent deformable registration algorithm. Our framework renders these registration algorithms robust to topological changes, where the output will consist of two components. The first is a deformation field that presents only the brain structural change, which is the expected vector field if the topological change did not exist. The second component is a label map that provides a segmentation of the topological changes appearing in the input images. | ['Xiaoxing Li', 'Christopher L. Wyatt'] | Modeling topological changes in deformable registration | 115,846 |
Automatic Page Turner Machine for High-speed Book Digitization | ['Miho Tamei', 'Masahiro Yamada', 'Yoshihiro Watanabe', 'Masatoshi Ishikawa'] | Automatic Page Turner Machine for High-speed Book Digitization | 868,661 |
In mobile wireless personal area networks (WPAN), the position of each node changes over time. A network protocol that is able to dynamically update its links in order to maintain strong connectivity is said to be "self-reconfiguring." We propose a mobile wireless personal area network (WPAN) design method with a self-reconfiguring protocol for power efficiency. The WPAN is self-organized into clusters using an unsupervised clustering method, fuzzy c-means. A fuzzy logic system is applied to master/controller election for each cluster. A self-reconfiguring topology is proposed to manage mobility and recursively update the network topology. We also modify the mobility management scheme with hysteresis to overcome the ping-pong effect. Simulation results show that our scheme performs much better than the existing algorithm. | ['Qilian Liang'] | Designing power aware self-reconfiguring topology for mobile wireless personal area networks using fuzzy logic | 439,939 |
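The clustering step builds on the standard fuzzy c-means updates, iterated to convergence (shown here for reference; the node features used, e.g. position, are the application-specific part):

```latex
u_{ij} = \Biggl[\, \sum_{k=1}^{C}
  \biggl( \frac{\lVert x_j - c_i \rVert}{\lVert x_j - c_k \rVert} \biggr)^{\!\frac{2}{m-1}}
\Biggr]^{-1},
\qquad
c_i = \frac{\sum_{j} u_{ij}^{\,m}\, x_j}{\sum_{j} u_{ij}^{\,m}}
```

Here $u_{ij}$ is the membership of node $j$ in cluster $i$ and $m > 1$ is the fuzzifier; the fuzzy-logic election then picks a master/controller within each resulting cluster.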
Recently, the Degrees-of-Freedom (DoFs) region of multiple-input-single-output (MISO) networks with imperfect channel state information at the transmitter (CSIT) has attracted significant attention. An achievable scheme, known as rate-splitting (RS), integrates common-message-multicasting and private-message-unicasting. In this paper, focusing on the general $K$-cell MISO IC with an arbitrary CSIT quality of each interfering link, we first identify the DoF region achieved by RS. Second, we introduce a novel scheme, called topological RS (TRS), whose novelties compared with RS lie in a multi-layer structure and in transmitting multiple common messages to be decoded by groups of users rather than all users. The design of TRS is motivated by a novel interpretation of the $K$-cell IC with imperfect CSIT as a weighted sum of a series of partially connected networks. We show that the DoF region achieved by TRS yields the best known result so far, and we find the maximal sum DoF via hypergraph fractional packing. Finally, for a realistic scenario where each user is connected to three dominant transmitters, we identify the sufficient condition where TRS strictly outperforms conventional schemes, and show that TRS is optimal for some CSIT qualities. | ['Chenxi Hao', 'Bruno Clerckx'] | MISO Networks With Imperfect CSIT: A Topological Rate-Splitting Approach | 622,749 |
We consider convex nonsmooth optimization problems where additional information with uncontrolled accuracy is readily available. It is often the case when the objective function is itself the output of an optimization solver, as for large-scale energy optimization problems tackled by decomposition. In this paper, we study how to incorporate the uncontrolled linearizations into (proximal and level) bundle algorithms in view of generating better iterates and possibly accelerating the methods. We provide the convergence analysis of the algorithms using uncontrolled linearizations, and we present numerical illustrations showing they indeed speed up resolution of two stochastic optimization problems coming from energy optimization (two-stage linear problems and chance-constrained problems in reservoir management). | ['Jérôme Malick', 'Welington de Oliveira', 'Sofia Zaourar'] | Uncontrolled inexact information within bundle methods | 581,929 |
Eye movements "steal" response time effect during imagery scanning. | ['Roger Johansson', 'Jana Holsanova'] | Eye movements "steal" response time effect during imagery scanning. | 802,939 |
Compressed sensing theory promises to sample sparse signals using a limited number of samples. It also resolves the problem of under-determined systems of linear equations when the unknown vector is sparse. Those promising applications induced a growing interest for this field in the past decade. In compressed sensing, the sparse signal estimation is performed using the knowledge of the dictionary used to sample the signal. However, dictionary mismatch often occurs in practical applications, in which case the estimation algorithm uses an uncertain dictionary knowledge. This mismatch introduces an estimation bias even when the noise is low and the support (i.e. location of non-zero amplitudes) is perfectly estimated. In this paper we consider that the dictionary suffers from a structured mismatch, this type of error being of particular interest in sparse estimation applications. We propose the Bias-Correction Estimator (BiCE) post-processing step which enhances the non-zero amplitude estimation of any sparse-based estimator in the presence of a structured dictionary mismatch. We give the theoretical Bayesian Mean Square Error of the proposed estimator and show its statistical efficiency in the low noise variance regime. | ['Stephanie Bernhardt', 'Remy Boyer', 'Sylvie Marcos', 'Pascal Larzabal'] | Sparse-based estimators improvement in case of Basis mismatch | 604,949 |
For many applications, to reduce the processing time and the cost of decision making, we need to reduce the number of sensors, where each sensor produces a set of features. This sensor selection problem is a generalized feature selection problem. Here, we first present a sensor (group-feature) selection scheme based on Multi-Layered Perceptron Networks. This scheme sometimes selects redundant groups of features. So, we propose a selection scheme which can control the level of redundancy between the selected groups. The idea is general and can be used with any learning scheme. We have demonstrated the effectiveness of our scheme on several data sets. In this context, we define different measures of sensor dependency (dependency between groups of features). We have also presented an alternative learning scheme which is more effective than our old scheme. The proposed scheme is also adapted to radial basis function (RBF) networks. The advantages of our scheme are threefold. It looks at all the groups together and hence can exploit nonlinear interaction between groups, if any. Our scheme can simultaneously select useful groups as well as learn the underlying system. The level of redundancy among groups can also be controlled. | ['Rudrasis Chakraborty', 'Chin-Teng Lin', 'Nikhil R. Pal'] | Sensor (Group Feature) Selection with Controlled Redundancy in a Connectionist Framework | 487,621 |
Effects of a mood and an unrecognized hint on insight problem solving. | ['Ryo Orita', 'Masasi Hattori'] | Effects of a mood and an unrecognized hint on insight problem solving. | 806,355 |
This paper presents a low-cost and practical approach to achieve basic input using a tactile cube-shaped object, augmented with a set of sensors, processor, batteries and wireless communication. The algorithm we propose combines a finite state machine model incorporating prior knowledge about the symmetrical structure of the cube, with maximum likelihood estimation using multivariate Gaussians. The claim that the presented solution is cheap, fast and requires few resources, is demonstrated by implementation in a small-sized, microcontroller-driven hardware configuration with inexpensive sensors. We conclude with a few prototyped applications that aim at characterizing how the familiar and elementary shape of the cube allows it to be used as an interaction device. | ['Kristof Van Laerhoven', 'Nicolas Villar', 'A. Schmidt', 'Gerd Kortuem', 'Hans-Werner Gellersen'] | Using an autonomous cube for basic navigation and input | 334,583 |
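A minimal sketch of the maximum-likelihood half of such an algorithm, classifying which cube face is up from a 3-axis accelerometer sample; the per-face Gaussian parameterization and regularization here are assumptions of the sketch, not the paper's exact design:

```python
import numpy as np

def fit_face_models(samples_by_face):
    """Fit one multivariate Gaussian per cube face from calibration data;
    samples_by_face maps a face label to an (N, 3) array of readings."""
    models = {}
    for face, X in samples_by_face.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(3)   # regularize
        models[face] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return models

def classify_face(x, models):
    """Return the face maximizing the Gaussian log-likelihood of reading x."""
    def loglik(mu, cov_inv, logdet):
        d = x - mu
        return -0.5 * (d @ cov_inv @ d + logdet)
    return max(models, key=lambda f: loglik(*models[f]))
```

The finite state machine then gates these decisions using the cube's symmetric structure: for instance, a single tilt cannot reach the opposite face directly, so transitions inconsistent with the current state can be rejected.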
In this paper, a genetic algorithm (GA)-based optimisation technique for controllers of a two-actuator-based levitation system is discussed. GA has a highly proven track record of optimising parameters for different types of control schemes. Any electromagnetic levitation system (EMLS) is inherently unstable and strongly non-linear in nature. Controllers based on a linear model and designed by a classical approach for any EMLS have a restricted zone of operation. For a small variation of the operating air-gap, there is sharp degradation of controller performance. But it is essential to design an optimised controller that will stabilise an unstable EMLS and will provide satisfactory performance for a wide range of operating air-gaps. This paper focuses mainly on optimal control of a proposed two-actuator-based EMLS scheme, using a stochastic optimisation technique based on GA. | ['Rupam Bhaduri', 'Subrata Banerjee'] | Optimisation of controller parameters by genetic algorithm for an electromagnetic levitation system | 270,433 |
Lying Pose Recognition for Elderly Fall Detection | ['Simin Wang', 'Salim Zabir', 'Bastian Leibe'] | Lying Pose Recognition for Elderly Fall Detection | 682,980 |
Device mismatch in a mixer is generally believed to be the major contributor of second-order distortion that limits the performance of a direct conversion receiver. In this brief, we show that even with perfect matching, leakage at the local oscillator frequency prior to mixing creates large second-order distortion when the third-order input intercept point of the receiver is not sufficiently large. Measurement data from a quad-band global system for mobile communications/general packet radio service transceiver implemented in a 90-nm digital CMOS process are also presented to support our claim. | ['Imtinan Elahi', 'Khurram Muhammad', 'Poras T. Balsara'] | IIP2 and DC Offsets in the Presence of Leakage at LO Frequency | 249,819 |
The Kneed Walker is a physics-based model derived from a planar biomechanical characterization of human locomotion. By controlling torques at the knees, hips and torso, the model captures a full range of walking motions with foot contact and balance. Constraints are used to properly handle ground collisions and joint limits. A prior density over walking motions is based on dynamics that are optimized for efficient cyclic gaits over a wide range of natural human walking speeds and step lengths, on different slopes. The generative model used for monocular tracking comprises the Kneed Walker prior, a 3D kinematic model constrained to be consistent with the underlying dynamics, and a simple measurement model in terms of appearance and optical flow. The tracker is applied to people walking with varying speeds, on hills, and with occlusion. | ['Marcus A. Brubaker', 'David J. Fleet'] | The Kneed Walker for human pose tracking | 48,600 |
By combining program logic and static analysis, we present an automatic approach to construct program proofs to be used in Proof-Carrying Code. We use Hoare logic in representing the proofs of program properties, and the abstract interpretation in computing the program properties. This combination automatizes proof construction; an abstract interpretation automatically estimates program properties (approximate invariants) of our interest, and our proof-construction method constructs a Hoare-proof for those approximate invariants. The proof-checking side (code consumer's side) is insensitive to a specific static analysis; the assertions in the Hoare proofs are always first-order logic formulas for integers, into which we first compile the abstract interpreters' results. Both the property-compilation and the proof construction refer to the standard safety conditions that are prescribed in the abstract interpretation framework. We demonstrate this approach for a simple imperative language with an example property being the integer ranges of program variables. We prove the correctness of our approach, and analyze the size complexity of the generated proofs. | ['Sunae Seo', 'Hongseok Yang', 'Kwangkeun Yi'] | Automatic Construction of Hoare Proofs from Abstract Interpretation Results | 178,792 |
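A toy instance of the compilation step (the concrete bounds are illustrative): suppose the interval analysis infers $x \in [0, 9]$ just before an increment; the estimate is compiled to a first-order arithmetic formula and discharged by the Hoare assignment axiom $\{P[e/x]\}\; x := e\; \{P\}$:

```latex
\{\, 0 \le x \land x \le 9 \,\} \;\; x := x + 1 \;\; \{\, 1 \le x \land x \le 10 \,\}
```

The standard safety conditions of the abstract interpretation framework are what the proof construction appeals to at each such step, which is what makes building the Hoare proof automatic.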
A new image coding scheme called dynamic codebook adaptive vector quantization (DCAVQ) is proposed. DCAVQ is designed to minimize the transmission overhead and computational complexity of existing adaptive vector quantization systems. Simulation results using real images demonstrate that both the subjective and objective performance of decoded images is improved using the proposed method, while the execution time of the system remains comparable to that of a standard universal VQ system. | ['Jian Feng', 'Kwok-Tung Lo'] | Dynamic codebook adaptive vector quantization for image coding | 437,069 |
Developing a Leading Digital Multi-sided Platform: Examining IT Affordances and Competitive Actions in Alibaba.com | ['Ter Chian Felix Tan', 'Barney Tan', 'Shan Ling Pan'] | Developing a Leading Digital Multi-sided Platform: Examining IT Affordances and Competitive Actions in Alibaba.com | 723,193 |
Sociolinguists are regularly faced with the task of measuring phonetic features from speech, which involves manually transcribing audio recordings ‐ a major bottleneck to analyzing large collections of data. We harness automatic speech recognition to build an online end-to-end web application where users upload untranscribed speech collections and receive formant measurements of the vowels in their data. We demonstrate this tool by using it to automatically analyze President Barack Obama’s vowel pronunciations. | ['Sravana Reddy', 'James N. Stanford'] | A Web Application for Automated Dialect Analysis | 614,394 |
The design of analog circuits by hand is a difficult task, and many successful approaches to automating this design process based on evolutionary computation have been proposed. The fitness evaluations necessary to evolve linear analog circuits are relatively straightforward. However, this is not the case for nonlinear analog circuits, especially for the most general class of design tasks: reverse-engineering an arbitrary nonlinear 'black box' circuit. Here, we investigate different approaches to fitness evaluations in this setting. Results show that an incremental algorithm outperforms naive approaches, and that it is possible to evolve robust nonlinear analog circuits with time-domain output behavior that closely matches that of black box circuits for any time-domain input. | ['Theodore W. Cornforth', 'Hod Lipson'] | Reverse-Engineering Nonlinear Analog Circuits with Evolutionary Computation | 607,431 |
Many image processing applications require fast convolution of an image with a set of large 2D filters. Field-programmable gate arrays (FPGAs) are often used to achieve this goal due to their fine grain parallelism and reconfigurability. This paper presents a novel algorithm for the class of designs that implement a convolution with a set of 2D filters. Firstly, it explores the heterogeneous nature of modern reconfigurable devices using a singular value decomposition (SVD) based algorithm, which orders the coefficients according to their impact on the filters' approximation. Secondly, it exploits any redundancy that exists within each filter and between different filters in the set, leading to designs with minimized area. Experiments with real filter sets from computer vision applications demonstrate up to 60% reduction in the required area. | ['Christos-Savvas Bouganis', 'Peter Y. K. Cheung', 'George A. Constantinides'] | Heterogeneity exploration for multiple 2D filter designs | 73,212 |
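The decomposition underlying such designs is the standard SVD expansion of a 2D filter into an ordered sum of separable terms (the truncation rank $r$ is the design knob):

```latex
F = \sum_{i=1}^{\operatorname{rank}(F)} \sigma_i\, u_i v_i^{\mathsf{T}}
\;\approx\; \sum_{i=1}^{r} \sigma_i\, u_i v_i^{\mathsf{T}},
\qquad \sigma_1 \ge \sigma_2 \ge \cdots \ge 0
```

Each retained term is implemented as a pair of 1-D convolutions (with $u_i$ and $v_i^{\mathsf{T}}$), and the singular values order the terms by their contribution to the approximation error, which is what allows coefficients to be allocated to heterogeneous FPGA resources according to their impact.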
Traditional CSCW (computer-supported cooperative work) systems concentrate on WYSIWIS (what you see is what I see) in order to support as many facilities as those in face-to-face collaboration environments. However, many obstacles have made these efforts less successful than expected. Therefore, we need to consider other aspects of a CSCW system that can better support collaboration among people; that is, collaboration might be made more productive and more efficient by special computer system facilities such as role specification. This paper reviews CSCW systems with a concentration on group awareness support, proposes a new classification of group awareness, and illustrates that in collaborative systems WYSINWIS (what you see is not what I see) is as important and constructive as WYSIWIS. It then proposes role-based collaboration and role specification mechanisms that can be used as a new method to support WYSINWIS. | ['Haibin Zhu'] | From WYSIWIS to WISINWIS: role-based collaboration | 205,947 |
This paper describes a novel low-voltage low-power resonant amplifier-based sub-harmonic mixer using a current-reuse-bleeding technique for zero-IF transceiver system applications. A novel resonant amplifier-based sub-harmonic balun is designed and used in the mixer, which can double the frequency of the local oscillator (LO) signal. Moreover, the sub-harmonic balun can provide a pair of double-frequency LO signals; unlike conventional mixers, the novel sub-harmonic mixer requires only one low-power LO input. The proposed mixer delivers a remarkable conversion gain of 14.5 dB with an LO power of −2 dBm, and its power consumption is 0.65 mW with a 0.8 V supply voltage. The input-referred third-order intercept point (IIP3) of the mixer is 1 dBm, and the chip area is only 0.52 mm². | ['Jie Jin'] | Resonant amplifier-based sub-harmonic mixer for zero-IF transceiver applications | 951,902 |
Correlated interval representations of range uncertainty offer an attractive solution to approximating computations on statistical quantities. The key idea is to use finite intervals to approximate the essential mass of a probability density function (pdf) as it moves through numerical operators; the resulting compact interval-valued solution can be easily interpreted as a statistical distribution and efficiently sampled. This paper first describes improved interval-valued algorithms for asymptotic waveform evaluation (AWE)/passive reduced-order interconnect macromodeling algorithm (PRIMA) model order reduction for tree-structured interconnect circuits with correlated resistance, inductance, and capacitance (RLC) parameter variations. By moving to a much faster interval-valued linear solver based on path-tracing ideas, and making more optimal tradeoffs between interval- and scalar-valued computations, delay statistics can be extracted roughly 10× faster than with classical Monte Carlo (MC) simulation, with accuracy to within 5%. This improved interval analysis strategy is further applied to build statistical effective capacitance (C_eff) models for variational interconnect, and we show how to extract statistics of C_eff over 100× faster than classical MC simulation, with errors less than 4%. | ['James D. Ma', 'Rob A. Rutenbar'] | Fast interval-valued statistical modeling of interconnect and effective capacitance | 389,469
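For readers unfamiliar with interval-valued computation, here is a minimal sketch of plain (uncorrelated) interval arithmetic, the starting point that the paper's correlated interval representation refines; the RC values below are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product interval is bounded by the extreme endpoint products.
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

# A resistance and a capacitance, each with +/-10% variation,
# propagate to an interval on the RC time constant.
R = Interval(0.9e3, 1.1e3)      # ohms (hypothetical)
C = Interval(0.9e-12, 1.1e-12)  # farads (hypothetical)
tau = R * C
print(f"tau in [{tau.lo:.3e}, {tau.hi:.3e}] seconds")
```

Plain interval arithmetic of this kind ignores correlation between operands and therefore overestimates ranges; tracking correlations, as the paper does, is what keeps the intervals tight enough to interpret statistically.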
Design and implementation status of an Anesthesia Information Management System. | ['D. Zogogianni', 'Aris Tzavaras', 'Basile Spyropoulos'] | Design and implementation status of an Anesthesia Information Management System. | 789,179 |
In a code division multiple access (CDMA) system, signal detection under multipath distortion typically requires estimation of unknown channel parameters first. In such a scenario, the performance of receivers relies heavily on the accuracy of channel estimates. In this paper, the effects of channel estimation errors due to finite data samples on the performance of linear CDMA receivers are studied when channel parameters are estimated blindly by a recently proposed covariance-matching technique. These receivers include zero-forcing (ZF), direct matrix-inversion (DMI) minimum mean-square-error (MMSE), subspace MMSE, and RAKE receivers. Their output signal-to-interference-plus-noise ratios (SINRs) and bit-error-rates (BERs) are adopted as performance measures. Expressions for performance indicators under such imperfect conditions are derived from a perturbation perspective and verified by simulation examples. | ['Zhengyuan Xu'] | Effects of imperfect blind channel estimation on performance of linear CDMA receivers | 476,266
We present a method of constructing rate-compatible polar codes that are capacity-achieving with low-complexity sequential decoders. The proposed code construction allows for incremental retransmissions at different rates in order to adapt to channel conditions. The main idea of the construction exploits certain common characteristics of polar codes that are optimized for a sequence of degraded channels. The proposed approach allows for an optimized polar code to be used at every transmission thereby achieving capacity. Due to the length limitation of conventional polar codes, the proposed construction can only support a restricted set of rates that is characterized by the size of the kernel when conventional polar codes are used. We thus consider punctured polar codes which provide more flexibility on block length by controlling a puncturing fraction. We show the existence of capacity-achieving punctured polar codes for any given puncturing fraction. Using punctured polar codes as constituent codes, we show that the proposed rate-compatible polar code is capacity-achieving for an arbitrary sequence of rates and for any class of degraded channels. | ['Song-Nam Hong', 'Dennis Hui', 'Ivana Maric'] | Capacity-achieving rate-compatible polar codes | 573,307 |
In this paper we propose a method for finding the reference set of a decision making unit (DMU) without chasing down all alternative optimal solutions of the envelopment form, which is a strongly degenerate problem. The reference set is useful as a benchmark for an inefficient DMU, for identifying the status of returns to scale, for ranking of DMUs, and so on. Finally, numerical examples are given to illustrate our proposed approach. | ['Gholam Reza Jahanshahloo', 'A. Shirzadi', 'S.M. Mirdehghan'] | FINDING THE REFERENCE SET OF A DECISION MAKING UNIT | 145,597
Present-day developments in electrical power transmission and distribution require a reconsideration of the status quo. In other words, international regulations demand increased reliability and reduced environmental impact, and correspondingly motivate the development of dependable systems. Power grids, especially intelligent (smart) ones, are becoming industrial solutions that follow standardized development. International standardization in the field of power transmission and distribution strengthens this influence of technology. The rise of dedicated standards for SAS (Substation Automation Systems) communications, such as the leading International Electrotechnical Commission standard IEC 61850, drives modern technological trends in this field. Within this standard, a constraint of low ETE (End-to-End) latency must be respected, and time-critical status transmission must be achieved. This experimental study focuses on the IEC 61850 SAS communication standard, specifically IEC 61850 GOOSE (Generic Object Oriented Substation Events), to implement an investigational method for determining the protection communication delay. The method observes GOOSE behaviour through monitoring and analysis using network test equipment, i.e. SPAN (Switch Port Analyser) and TAP (Test Access Point) devices, with off-the-shelf hardware and software solutions. | ['Ahmed Altaher', 'Stéphane Mocanu', 'Jean-Marc Thiriet'] | Evaluation of Time-Critical Communications for IEC 61850-Substation Network Architecture | 637,026
The architecture of the Internet is based on a number of principles, including the self-describing datagram packet, the end to end arguments, diversity in technology and global addressing. As the Internet has moved from a research curiosity to a recognized component of mainstream society, new requirements have emerged that suggest new design principles, and perhaps suggest that we revisit some old ones. This paper explores one important reality that surrounds the Internet today: different stakeholders that are part of the Internet milieu have interests that may be adverse to each other, and these parties each vie to favor their particular interests. We call this process "the tussle". Our position is that accommodating this tussle is crucial to the evolution of the network's technical architecture. We discuss some examples of tussle, and offer some technical design principles that take it into account. | ['David D. Clark', 'John Wroclawski', 'Karen R. Sollins', 'Robert Braden'] | Tussle in cyberspace: defining tomorrow's internet | 397,518 |
Event-triggering strategy is one of the real-time control implementation techniques which aims at achieving minimum resource utilisation while ensuring satisfactory performance of the closed-loop system. In this paper, we address the problem of robust stabilisation for a class of nonlinear systems subject to external disturbances using sliding mode control (SMC) with an event-triggering scheme. An event-triggering scheme is developed for SMC to ensure that the sliding trajectory remains confined in the vicinity of the sliding manifold. The event-triggered SMC brings the sliding mode into the system, and thus the steady-state trajectories of the system also remain bounded within a predesigned region in the presence of disturbances. The design of event parameters is also given considering the practical constraints on control execution. We show that the next triggering instant is larger than its immediate past triggering instant by a given positive constant. The analysis is also presented while taking delay into a... | ['Abhisek K. Behera', 'B. Bandyopadhyay'] | Event-triggered sliding mode control for a class of nonlinear systems | 657,733
In applications where abstract models of reactive systems are to be inferred, one important challenge is that the behavior of such systems can be inherently nondeterministic. To cope with this challenge, we developed an algorithm to infer nondeterministic computation models in the form of Mealy machines. We introduce our approach and provide extensive experimental results to assess its potential in the identification of black-box reactive systems. The experiments involve both artificially-generated abstract Mealy machines, and the identification of a TFTP server model starting from a publicly-available implementation. | ['Ali Khalili', 'Armando Tacchella'] | Learning Nondeterministic Mealy Machines | 202,275
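As a minimal sketch of the kind of model being inferred (not the inference algorithm itself), the following Python class represents a nondeterministic Mealy machine whose transitions map a (state, input) pair to a set of possible (output, next state) pairs; the states and symbols are made up.

```python
from collections import defaultdict

class NondetMealy:
    """A nondeterministic Mealy machine: a transition maps (state, input)
    to a *set* of possible (output, next_state) pairs."""

    def __init__(self, initial):
        self.initial = initial
        self.delta = defaultdict(set)  # (state, input) -> {(output, state')}

    def add(self, state, inp, out, nxt):
        self.delta[(state, inp)].add((out, nxt))

    def step(self, states, inp):
        """All (output, next_state) pairs reachable from any current state on `inp`."""
        return {pair for s in states for pair in self.delta[(s, inp)]}

# A tiny machine where input 'a' from q0 may answer 'x' or 'y'.
m = NondetMealy("q0")
m.add("q0", "a", "x", "q1")
m.add("q0", "a", "y", "q2")
print(m.step({m.initial}, "a"))  # {('x', 'q1'), ('y', 'q2')}
```

The learner's task is to reconstruct a machine of this shape purely from input/output traces of the black-box system, where repeated queries on the same input may yield different outputs.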
Functional brain networks reconfigure spontaneously during rest. Such network dynamics can be studied by dynamic functional connectivity (dynFC); i.e., sliding-window correlations between regional brain activity. Key parameters, such as window length and cut-off frequencies for filtering, are not yet systematically studied. In this letter we provide the fundamental theory from signal processing to address these parameter choices when estimating and interpreting dynFC. We guide the reader through several illustrative cases, both simple analytical models and experimental fMRI BOLD data. First, we show how spurious fluctuations in dynFC can arise due to the estimation method when the window length is shorter than the largest wavelength present in both signals, even for deterministic signals with a fixed relationship. Second, we study how real fluctuations of dynFC can be explained using a frequency-based view, which is particularly instructive for signals with multiple frequency components such as fMRI BOLD, demonstrating that fluctuations in sliding-window correlation emerge by interaction between frequency components similar to the phenomenon of beat frequencies. We conclude with practical guidelines for the choice and impact of the window length. | ['Nora Leonardi', 'Dimitri Van De Ville'] | On spurious and real fluctuations of dynamic functional connectivity during rest | 247,291
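The paper's central warning, that a window shorter than the largest wavelength present in both signals produces spurious dynFC fluctuations, can be reproduced in a few lines. This numpy sketch (illustrative only; the sampling rate and frequencies are hypothetical) compares the variability of sliding-window correlations for a short and a long window on two noisy sinusoids with a fixed phase relation.

```python
import numpy as np

def sliding_window_corr(x, y, win):
    """Pearson correlation of x and y over a sliding window of length `win`."""
    n = len(x) - win + 1
    return np.array([np.corrcoef(x[i:i+win], y[i:i+win])[0, 1] for i in range(n)])

# Two slow sinusoids (period 100 s) with a fixed phase relation: a window
# shorter than their period produces spurious fluctuations in the dynFC.
rng = np.random.default_rng(1)
t = np.arange(600) * 0.5                      # 0.5 s sampling interval
x = np.sin(2 * np.pi * 0.01 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 0.01 * t + 0.5) + 0.1 * rng.standard_normal(t.size)
short = sliding_window_corr(x, y, win=40)     # 20 s window < 100 s period
long_ = sliding_window_corr(x, y, win=240)    # 120 s window > period
print(f"std of dynFC: short window {short.std():.3f}, long window {long_.std():.3f}")
```

Even though the underlying relation between the two signals never changes, the short-window estimate fluctuates strongly, which is exactly the estimation artifact the letter analyzes.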
This paper investigates the example utilization problem in query-by-example spoken term detection when multiple examples are provided for each query term. To achieve this goal, we propose three evaluation metrics to assess the quality of all the examples, namely the posteriorgram stability score, pronunciation reliability score, and local similarity score. We also present a clustering-based example generation approach to creating better examples based on the original ones. Experiments conducted on a telephone speech corpus show that it is better to use several representative examples selected by the quality assessment process than to simply use all the examples. Furthermore, even better results can be obtained if the generated examples are used. | ['J. Xu', 'Ge Zhang', 'Yonghong Yan'] | Effective utilization of multiple examples in query-by-example spoken term detection | 801,953
Battery lifetime prediction for energy-aware computing | ['Daler N. Rakhmatov', 'Sarma B. K. Vrudhula', 'Deborah A. Wallach'] | Battery lifetime prediction for energy-aware computing | 545,052 |
A Haskell-Implementation of STM Haskell with Early Conflict Detection. | ['David Sabel'] | A Haskell-Implementation of STM Haskell with Early Conflict Detection. | 796,324 |
Entangled photon source is an essential tool for quantum information experiments, such as tests of Bell inequalities, quantum teleportation, and entanglement swapping. We report the generation of polarization-entangled photon pairs using grating structures of nonlinear materials such as Ti-diffused Lithium Niobate. This approach is useful for quantum integrated optics because the size of this element is less than 325 microns. Polarization-entangled photon pairs are created using the nonlinear optical process of type-II spontaneous parametric down-conversion (SPDC). The pump laser at 800 nm was doubled to a wavelength of 400 nm, which was used to produce SPDC in the grating structure of Lithium Niobate. The entangled photon pairs were generated at wavelengths of 692 nm and 950 nm. The grating structure used consists of two sections. The grating period and length of the first section are 162 nm and 129.6 microns, respectively. The grating period of the second section is 193 nm and its length is 193 microns. | ['Shamsolah Salemian', 'Shahram Mohammadnejad'] | THE GENERATION OF POLARIZATION-ENTANGLED PHOTON PAIRS USING GRATING STRUCTURES OF Ti-DIFFUSED LITHIUM NIOBATE WAVEGUIDES IN INTEGRATED OPTICS | 546,603
A Reservoir Balancing Constraint with Applications to Bike-Sharing | ['Joris Kinable'] | A Reservoir Balancing Constraint with Applications to Bike-Sharing | 835,902 |
Computational modeling of biological systems is becoming increasingly common as scientists attempt to understand biological phenomena in their full complexity. Here we distinguish between two types of biological models, mathematical and computational, according to their different representations of biological phenomena and their diverse potential. We call the approach of constructing computational models of biological systems Executable Biology, as it focuses on the design of executable computer algorithms that mimic biological phenomena. We give an overview of the main modeling efforts in this direction, and discuss some of the new challenges that executable biology poses for computer science and biology. We argue that for executable biology to reach its full potential as a mainstream biological technique, formal and algorithmic approaches must be integrated into biological research, driving biology towards a more precise engineering discipline. | ['Jasmin Fisher', 'Thomas A. Henzinger'] | Executable biology | 670,479
Ambiguity in natural language requirements has long been recognized as an inevitable challenge in requirements engineering (RE). Various initiatives have been taken by RE researchers to address the challenges of ambiguity. In this paper we present the results of a mapping study on the application of Natural Language Processing (NLP) techniques for addressing ambiguity in requirements. A systematic review of the literature identified 174 studies on the subject published from 1995 to 2015, of which only the 28 empirically evaluated studies were selected. Of the resulting set of papers, 81% focus on detecting ambiguity, whereas 4% and 5% focus on reducing and removing ambiguity, respectively. Addressing syntactic, semantic, and lexical ambiguities has attracted more attention than other types. In spite of all the research effort, there is a lack of empirical evaluation of NLP tools and techniques for addressing ambiguity in requirements. The results point out gaps in the empirical evidence and raise questions about the design of an analytical framework for research in this field. | ['Muneera Bano'] | Addressing the challenges of requirements ambiguity: A review of empirical literature | 641,112
The University of North Dakota is developing airspace within the state where Unmanned Aircraft Systems (UASs) can be flown without an onboard sense-and-avoid system or Temporary Flight Restrictions (TFRs). With funding from the U.S. Air Force, a mobile ground-based radar system capable of detecting aircraft operating in Class E airspace, together with the software to display such information to UAS operators, is being developed. The current system uses an Automatic Dependent Surveillance – Broadcast (ADS-B) transceiver to detect any ADS-B-equipped aircraft within the vicinity, and a Ground Control Station (GCS) to detect and control the UAS. Once one or more ground-based radars are integrated into the system, it will also be capable of detecting non-cooperative aircraft (i.e. aircraft that aren't equipped with ADS-B transceivers) operating within the vicinity. The current system uses a portable, high-availability architecture. Since the system is intended to detect potential airspace conflicts from the ground, greater computational power is available to it than to onboard sense-and-avoid systems. The probability of a midair collision depends on the proximity of aircraft to each other, the performance characteristics of the aircraft, and the probabilities of pilots performing basic maneuvers with the aircraft. In this paper the authors present the results of data mining an ADS-B data set from 11 days in early 2010. Probabilistic models of pilot behavior were automatically extracted from the data using a genetic algorithm for cluster analysis. | ['Ronald Marsh', 'Kirk Ogaard'] | Mining Heterogeneous ADS-B Data Sets for Probabilistic Models of Pilot Behavior | 188,619
One of the most important policies adopted in inventory control is the (R, S) policy (also known as the "replenishment cycle" policy). Under the non-stationary demand assumption the (R, S) policy takes the form (R_n, S_n), where R_n denotes the length of the nth replenishment cycle and S_n the corresponding order-up-to level. Such a policy provides an effective means of damping planning instability and coping with demand uncertainty. In this paper we develop a constraint programming (CP) approach able to compute optimal (R_n, S_n) policy parameters under stochastic demand, ordering, holding, and shortage costs. The convexity of the cost function is exploited during the search to compute bounds. We use the optimal solutions to analyze the quality of the solutions provided by an approximate MIP approach that exploits a piecewise linear approximation of the cost function. | ['Roberto Rossi', 'Armagan Tarim', 'Brahim Hnich', 'Steven David Prestwich'] | Replenishment Planning for Stochastic Inventory Systems with Shortage Cost | 331,997
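To make the (R_n, S_n) policy concrete, the sketch below estimates its expected cost by Monte Carlo simulation under Poisson demand. This is an illustrative policy evaluation, not the paper's CP model, and all cost parameters and policy values are hypothetical.

```python
import numpy as np

def simulate_rs_policy(cycles, levels, demand_mean, h=1.0, p=10.0, K=100.0, n_rep=2000):
    """Monte Carlo estimate of the expected cost of an (R_n, S_n) policy.

    cycles[n] is the length of the n-th replenishment cycle, levels[n] the
    order-up-to level at its start; demand per period is Poisson(demand_mean).
    h, p, K are unit holding, unit shortage, and fixed ordering costs.
    """
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_rep):
        cost = 0.0
        for R, S in zip(cycles, levels):
            inv = S               # order up to S at the start of the cycle
            cost += K             # fixed ordering cost
            for _ in range(R):
                inv -= rng.poisson(demand_mean)
                cost += h * max(inv, 0) + p * max(-inv, 0)
        total += cost
    return total / n_rep

# Hypothetical three-cycle plan over a ten-period horizon.
print(simulate_rs_policy(cycles=[3, 4, 3], levels=[70, 95, 75], demand_mean=20))
```

The optimization problem the paper solves is to choose the cycle lengths and order-up-to levels so that this expected cost is minimized.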
During the past several years, several strategies have been proposed for the control of joint movement in paraplegic subjects using functional electrical stimulation (FES), but developing a control strategy that provides satisfactory tracking performance, is robust against time-varying properties of muscle-joint dynamics, day-to-day variations, subject-to-subject variations, muscle fatigue, and external disturbances, and is easy to apply without any re-identification of plant dynamics during different experiment sessions is still an open problem. In this paper, we propose a novel control methodology that is based on a synergistic combination of neural networks with sliding-mode control (SMC) for controlling FES. The main advantage of SMC derives from the property of robustness to system uncertainties and external disturbances. However, the main drawback of standard sliding modes is mostly related to the so-called chattering caused by the high-frequency control switching. To eliminate the chattering, we couple two neural networks with online learning, without any offline training, into the SMC. A recurrent neural network is used to model the uncertainties and provide an auxiliary equivalent control to keep the uncertainties at low values, and consequently to use an SMC with a lower switching gain. The second neural network consists of a single neuron and is used as an auxiliary controller. The control law is switched from the SMC to neural control when the state trajectory of the system enters some boundary layer around the sliding surface. Extensive simulations and experiments on healthy and paraplegic subjects are provided to demonstrate the robustness, stability, and tracking accuracy of the proposed neuroadaptive SMC. The results show that the neuro-SMC provides accurate tracking control with fast convergence for different reference trajectories and could generate control signals to compensate for muscle fatigue and reject external disturbances. | ['Arash Ajoudani', 'Abbas Erfanian'] | A Neuro-Sliding-Mode Control With Adaptive Modeling of Uncertainty for Control of Movement in Paralyzed Limbs Using Functional Electrical Stimulation | 370,909
Many applications are concurrent and communicate over a network. The non-determinism in the thread and communication schedules makes it desirable to model check such systems. However, a simple state space exploration scheme is not applicable, as backtracking results in repeated communication operations. A cache-based approach solves this problem by hiding redundant communication operations from the environment. In this work, we propose a change from a linear-time to a branching-time cache, allowing us to relax restrictions in previous work regarding communication traces that differ between schedules. We successfully applied the new algorithm to real-life programs where a previous solution is not applicable. | ['Cyrille Artho', 'Watcharin Leungwattanakit', 'Masami Hagiya', 'Yoshinori Tanabe', 'Mitsuharu Yamamoto'] | Cache-Based Model Checking of Networked Applications: From Linear to Branching Time | 74,537 |
In this paper, we analyze the effect of model adaptation for dialog act tagging. The goal of adaptation is to improve the performance of the tagger using out-of-domain data or models. Dialog act tagging aims to provide a basis for further discourse analysis and understanding in conversational speech. In this study we used the ICSI meeting corpus with high-level meeting recognition dialog act (MRDA) tags, that is, question, statement, backchannel, disruptions, and floor grabbers/holders. We performed controlled adaptation experiments using the Switchboard (SWBD) corpus with SWBD-DAMSL tags as the out-of-domain corpus. Our results indicate that we can achieve significantly better dialog act tagging by automatically selecting a subset of the Switchboard corpus and combining the confidences obtained by both in-domain and out-of-domain models via logistic regression, especially when the in-domain data is limited. | ['Gokhan Tur', 'Umit Guz', 'Dilek Hakkani-Tür'] | MODEL ADAPTATION FOR DIALOG ACT TAGGING | 127,996 |
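The combination step can be illustrated in a few lines. The sketch below (with synthetic, hypothetical confidence scores standing in for the MRDA and SWBD taggers) trains a logistic regression to fuse in-domain and out-of-domain confidences for one binary dialog act decision.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-utterance confidences: column 0 from the in-domain
# (MRDA) tagger, column 1 from the out-of-domain (SWBD) tagger, for the
# binary decision "is this utterance a question?".
rng = np.random.default_rng(0)
n = 500
labels = rng.integers(0, 2, n)
in_conf = np.clip(labels + 0.40 * rng.standard_normal(n), 0, 1)
out_conf = np.clip(labels + 0.55 * rng.standard_normal(n), 0, 1)
X = np.column_stack([in_conf, out_conf])

# The logistic regression learns how much to trust each model's confidence.
combiner = LogisticRegression().fit(X[:400], labels[:400])
acc = combiner.score(X[400:], labels[400:])
print(f"combined accuracy: {acc:.3f}, learned weights: {combiner.coef_[0]}")
```

The learned weights make the fusion adaptive: when in-domain data is scarce and its tagger less reliable, the out-of-domain confidence automatically receives more weight.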
The interesting auxiliary algorithm for the simplex method by Luh and Tsaih (Computers and Operations Research 29 (2002)) and its modified version by B. J. Chaderjian and T. Gao (Computers and Operations Research 30 (2003)) present an effective method to start an initial basic feasible solution from an interior feasible solution. We modify the algorithm in the above references. By using QR decomposition, a much smaller (n−m−k)×(n−m−k) matrix, instead of an n×n matrix, is handled in the kth iteration. The QR factors at each iteration can be obtained from their predecessors cheaply by an updating process. This substantially improves the efficiency of the algorithm proposed by Luh and Tsaih. | ['Wei Li'] | On auxiliary algorithm for the simplex method by h. luh and r. tsaih | 594,426
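The updating idea, obtaining each iteration's QR factors cheaply from the previous ones rather than refactoring from scratch, can be illustrated with a rank-1 update; this generic numpy/scipy sketch is not the paper's exact scheme, and the matrix is random.

```python
import numpy as np
from scipy.linalg import qr_update

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
Q, R = np.linalg.qr(A)          # initial O(n^3) factorization

# Rank-1 change A' = A + u v^T, the kind of modification an iteration makes.
u = rng.standard_normal(6)
v = rng.standard_normal(6)
Q1, R1 = qr_update(Q, R, u, v)  # O(n^2) update of the existing factors
print(np.allclose(Q1 @ R1, A + np.outer(u, v)))  # True
```

Reusing the factors turns each iteration's linear-algebra cost from cubic to quadratic, which is the source of the efficiency gain the abstract claims.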
This thesis contains a collection of algorithms for working with the twisted groups of Lie type known as Suzuki groups, and small and large Ree groups. The two main problems under consideration are constructive recognition and constructive membership testing. We also consider problems of generating and conjugating Sylow and maximal subgroups. The algorithms are motivated by, and form a part of, the Matrix Group Recognition Project. Obtaining both theoretically and practically efficient algorithms has been a central goal. The algorithms have been developed with, and implemented in, the computer algebra system MAGMA. | ['Henrik Bäärnhielm'] | Algorithmic problems in twisted groups of Lie type | 451,312 |
Heterogeneous multi-core processors, such as the IBM Cell processor, can deliver high performance. However, these processors are notoriously difficult to program: different cores support different instruction set architectures, and the processor as a whole does not provide coherence between the different cores' local memories. We present Hera-JVM, an implementation of the Java Virtual Machine which operates over the Cell processor, thereby making this platform more readily accessible to mainstream developers. Hera-JVM supports the full Java language; threads from an unmodified Java application can be simultaneously executed on both the main PowerPC-based core and on the additional SPE accelerator cores. Migration of threads between these cores is transparent from the point of view of the application, requiring no modification to Java source code or bytecode. Hera-JVM supports the existing Java Memory Model, even though the underlying hardware does not provide cache coherence between the different core types. We examine Hera-JVM's performance under a series of real-world Java benchmarks from the SpecJVM, Java Grande and Dacapo benchmark suites. These benchmarks show a wide variation in relative performance on the different core types of the Cell processor, depending upon the nature of their workload. Execution of these benchmarks on Hera-JVM can achieve speedups of up to 2.25x by using one of the Cell processor's SPE accelerator cores, compared to execution on the main PowerPC-based core. When all six SPE cores are exploited, parallel workloads can achieve speedups of up to 13x compared to execution on the single PowerPC core. | ['Ross McIlroy', 'Joseph S. Sventek'] | Hera-JVM: a runtime system for heterogeneous multi-core architectures | 302,974
In this paper, a new perceptually adaptive method for reducing the blocking and ringing artifacts encountered in image compression is proposed. The method consists of three steps: (i) blocking-ringing artifact detection, (ii) perceptual distortion measurement, and (iii) blocking-ringing artifact reduction. The performance of the proposed method is evaluated objectively and subjectively in terms of image fidelity and the reduction of blocking, ringing, and blur effects. The obtained results are very promising and confirm once more the efficiency of perceptual approaches in image processing. | ['Quoc Bao Do', 'Marie Luong', 'Azeddine Beghdadi'] | A new perceptually adaptive method for deblocking and deringing | 228,462
A Distributed Storage Model for Sensor Networks | ['Lee Luan Ling'] | A Distributed Storage Model for Sensor Networks | 807,595 |
Ein Annotationsansatz zur Unterstützung einer ganzheitlichen Geschäftsanalyse. | ['Sylvia Radeschütz', 'Florian Niedermann', 'Bernhard Mitschang'] | Ein Annotationsansatz zur Unterstützung einer ganzheitlichen Geschäftsanalyse. | 668,177 |
Most clickstream visualization techniques display web users' clicks by highlighting paths in a graph of the underlying web site structure. These techniques do not scale to handle high-volume web usage data. Further, historical usage data is not considered. The work described in this paper differs from other work in the following aspect. Fuzzy clustering is applied to historical usage data and the result is imaged in the form of a point cloud. Web navigation data from active users are shown as animated paths in this point cloud. It is clear that when many paths get attracted to one of the clusters, that particular cluster is currently "hot." Further, as sessions terminate, new sessions are incrementally incorporated into the point cloud. The complete process is closely coupled to the fuzzy clustering technique and makes effective use of clustering results. The method is demonstrated on a very large set of web log records consisting of over half a million page clicks. | ['Srinidhi Kannappady', 'Sudhir P. Mudur', 'Nematollaah Shiri'] | Clickstream visualization based on usage patterns | 317,385
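A minimal fuzzy clustering step of the kind the visualization builds on can be sketched as follows; this is a textbook fuzzy c-means loop on hypothetical 2D session embeddings, not the paper's implementation.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns membership matrix U (n x c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Hypothetical 2D embedding of usage sessions with three "hot" regions.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, (50, 2)) for mu in ((0, 0), (3, 0), (0, 3))])
U, centers = fuzzy_c_means(X, c=3)
print(centers.round(2))
```

The soft memberships in U are what make the point-cloud rendering natural: each session can be drawn between clusters in proportion to how strongly it belongs to each.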
We propose a human-centered cyber-physical systematic approach for post-stroke monitoring. The system is composed of wearable inertial and physiological sensors as well as novel machine learning algorithms to analyze human behavior and physiological signals in the context of health care for patients with cerebrovascular diseases such as stroke. | ['Hongan Wang', 'Xiaoming Deng', 'Feng Tian'] | WiP Abstract: A Human-Centered Cyber-Physical Systematic Approach for Post-Stroke Monitoring | 259,740
Development of a Dynamic and Adaptive Simulator for Health. | ['Adrian L. Correa-Arango', 'Carolina Tamayo-Correa', 'David Mejía-Zapata', 'Edison Castrillón', 'Ever A. Torres-Silva', 'Ivan F. Luna-Gomez', 'Natalia Restrepo', 'Sebastián Vélez-Zuluga', 'José F. Flórez-Arango', 'J. W. Smith'] | Development of a Dynamic and Adaptive Simulator for Health. | 753,966 |
The development and evaluation of Vehicular Ad Hoc Networks (VANETs) and their applications is usually based on coupled simulation environments combining microscopic traffic models and packet-level network simulations. However, it is difficult or rather impossible to build simulation scenarios where all the protocols, possible situations, and various traffic conditions are properly modeled. This can be attributed to the common lack of information and fine-grained control of traffic patterns, missing protocol implementations, and the performance problems of such interlinked simulators. Therefore, simplifications are usually applied at all levels of development and modeling. The novelty of this paper lies in proposing a framework that integrates novel statistical information propagation and higher-level communication protocol models into an overall hybrid simulator. The proposed simulation framework can be applied for protocol design and system analysis when it is difficult to build complex interlinked simulation scenarios. In our simulation framework, different models from different levels of operation are integrated. More precisely, this includes a macroscopic traffic model, an information propagation VANET model, and discrete event-driven protocol models implemented in the MatLab/Simulink environment. The simulator is validated through various input parameters and scenarios. The results show good performance and interesting aspects of the hybrid simulator; thus, our framework provides a promising tool for the development and evaluation of VANET protocols and applications. | ['Attila Török', 'Daniel Jozsef', 'Balázs Sonkoly'] | A Hybrid Simulation Framework for Modeling and Analysis of Vehicular Ad Hoc Networks | 349,995
Protein structure prediction has been a grand challenge problem in structural biology over the last few decades. Protein quality assessment plays a very important role in protein structure prediction. In this paper, we propose a new protein quality assessment method which can predict both the local and global quality of protein 3D structural models. Our method uses both multi-model and single-model quality assessment methods for global quality assessment, and uses chemical, physical, and geometrical features together with the global quality score for local quality assessment. CASP9 targets are used to generate the features for local quality assessment. We evaluate the performance of our local quality assessment method on CASP10, where it is comparable with two state-of-the-art QA methods based on the average absolute difference between the real and predicted distances. In addition, we blindly tested our method on CASP11, and the good performance shows that combining single- and multiple-model quality assessment methods could be a good way to improve the accuracy of model quality assessment, and that the random forest technique can be used to train a good local quality assessment model. | ['Renzhi Cao', 'Taeho Jo', 'Jianlin Cheng'] | Evaluation of Protein Structural Models Using Random Forests | 644,724
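The training setup can be sketched with scikit-learn. Below, hypothetical per-residue feature vectors stand in for the chemical, physical, and geometrical features plus the global quality score, and the target is the real residue-level deviation; the evaluation mirrors the average absolute distance criterion.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-residue features (chemical / physical / geometrical
# descriptors plus a global quality score) and the target: the distance
# between the model residue and its position in the native structure.
rng = np.random.default_rng(0)
n_residues, n_features = 2000, 12
X = rng.standard_normal((n_residues, n_features))
true_dist = np.abs(X[:, 0] * 2 + X[:, 3] + 0.5 * rng.standard_normal(n_residues))

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:1500], true_dist[:1500])
pred = rf.predict(X[1500:])
print(f"average absolute distance error: {np.abs(pred - true_dist[1500:]).mean():.3f}")
```

Random forests fit this task well because the per-residue features interact nonlinearly and the ensemble averages out noise across trees without heavy feature engineering.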
Development and evaluation of teleoperators and haptic virtual environment technologies require objective quantitative measures of haptic quality. In this paper a theoretical framework and experimental procedures are introduced for quantitative assessment of haptic perception. Human haptic perception of objects is both distorted and uncertain. Perceptual distortion is defined to be a systematic bias of perception with respect to an objective standard. Perceptual uncertainty is defined to be a measure of the statistical distribution of percepts. A set of mathematical tools is presented to quantify perceptual distortion and uncertainty. Two experiments are presented which apply these tools to investigate the fundamental structure of human haptic perception. | ['Ernest D. Fasse', 'Neville Hogan'] | Quantitative measurement of haptic perception | 507,688
Inducing symbolic rules from entity embeddings using auto-encoders | ['Thomas Ager', 'Ondrej Kuzelka', 'Steven Schockaert'] | Inducing symbolic rules from entity embeddings using auto-encoders | 906,971 |
In today's web applications asynchronous requests to remote services using callbacks or futures are omnipresent. The continuation of such a non-blocking task is represented as a callback function that will later be called with the result of the request. This style of programming, where the remainder of a computation is captured in a continuation function, is called continuation-passing style (CPS). This style of programming can quickly lead to a phenomenon called "callback hell", which has a negative impact on the maintainability of applications that employ this style. Several alternatives to callbacks are therefore gaining traction within the web domain. For example, there are a number of frameworks that rely on automatically transforming sequential style code into continuation-passing style. However, these frameworks often employ a conservative approach in which each function call is transformed into CPS. This conservative approach can sequentialise requests that could otherwise be run in parallel. So-called delimited continuations can remedy this, but require special marks that have to be manually inserted in the code for marking the beginning and end of the continuation. In this paper we propose an alternative strategy in which we apply a delimited CPS transformation that instead operates on a Program Dependence Graph to find the limits of each continuation. We implement this strategy in JavaScript and demonstrate its applicability to various web programming scenarios. | ['Laure Philips', 'Joeri De Koster', 'Wolfgang De Meuter', 'Coen De Roover'] | Dependence-driven delimited CPS transformation for JavaScript | 917,245
Inspired by child development and brain research, we introduce a computational framework which integrates robotic active vision and reaching. Essential elements of this framework are sensorimotor mappings that link three different computational domains relating to visual data, gaze control, and reaching. The domain of gaze control is the central computational substrate that provides, first, a systematic visual search and, second, the transformation of visual data into coordinates for potential reach actions. In this respect, the representation of object locations emerges from the combination of sensorimotor mappings. The framework is tested in the form of two different architectures that perform visually guided reaching. Systematic experiments demonstrate how visual search influences reaching accuracy. The results of these experiments are discussed with respect to providing a reference architecture for developmental learning in humanoid robot systems. | ['M Hülse', 'Sebastian McBride', 'Mark Lee'] | Integration of Active Vision and Reaching From a Developmental Robotics Perspective | 51,452 |
Statistical ensembles of networks, i.e., probability spaces of all networks that are consistent with given aggregate statistics, have become instrumental in the analysis of complex networks. Their numerical and analytical study provides the foundation for the inference of topological patterns, the definition of network-analytic measures, as well as for model selection and statistical hypothesis testing. Contributing to the foundation of these data analysis techniques, in this Letter we introduce generalized hypergeometric ensembles, a broad class of analytically tractable statistical ensembles of finite, directed and weighted networks. This framework can be interpreted as a generalization of the classical configuration model, which is commonly used to randomly generate networks with a given degree sequence or distribution. Our generalization rests on the introduction of dyadic link propensities, which capture the degree-corrected tendencies of pairs of nodes to form edges between each other. Studying empirical and synthetic data, we show that our approach provides broad perspectives for model selection and statistical hypothesis testing in data on complex networks. | ['Giona Casiraghi', 'Vahan Nanumyan', 'Ingo Scholtes', 'Frank Schweitzer'] | Generalized Hypergeometric Ensembles: Statistical Hypothesis Testing in Complex Networks | 858,261 |
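The hypothesis-testing workflow the paper generalizes can be illustrated with the classical configuration model: sample degree-preserving random graphs and compare an observed statistic against the resulting null distribution. This networkx sketch uses the karate club graph and average clustering purely as an example; it is not the hypergeometric ensemble itself.

```python
import networkx as nx
import numpy as np

# Observed network: Zachary's karate club.
G = nx.karate_club_graph()
obs = nx.average_clustering(G)

# Null distribution from the configuration model (degrees preserved).
degrees = [d for _, d in G.degree()]
null = []
for seed in range(200):
    mg = nx.configuration_model(degrees, seed=seed)
    g = nx.Graph(mg)                       # collapse multi-edges
    g.remove_edges_from(nx.selfloop_edges(g))
    null.append(nx.average_clustering(g))

null = np.array(null)
p = (null >= obs).mean()                   # one-sided empirical p-value
print(f"observed clustering {obs:.3f}, null mean {null.mean():.3f}, p ~ {p:.3f}")
```

The advantage of an analytically tractable ensemble, as proposed in the paper, is that such p-values can be computed without resorting to this kind of repeated sampling.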
Differential Evolution is a simple, but effective approach for numerical optimization. Since the search efficiency of DE depends significantly on its control parameter settings, there has been much recent work on developing self-adaptive mechanisms for DE. We propose a new, parameter adaptation technique for DE which uses a historical memory of successful control parameter settings to guide the selection of future control parameter values. The proposed method is evaluated by comparison on 28 problems from the CEC2013 benchmark set, as well as CEC2005 benchmarks and the set of 13 classical benchmark problems. The experimental results show that a DE using our success-history based parameter adaptation method is competitive with the state-of-the-art DE algorithms. | ['Ryoji Tanabe', 'Alex Fukunaga'] | Success-history based parameter adaptation for Differential Evolution | 262,298 |
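The core of a success-history based adaptation can be sketched compactly. The class below (a simplified, SHADE-style illustration, not the paper's exact algorithm) keeps a circular memory of successful F values, samples new values around a random memory cell with Cauchy noise, and updates the memory with a weighted Lehmer mean.

```python
import numpy as np

class SuccessHistory:
    """Success-history memory for the scale factor F (a simplified sketch)."""

    def __init__(self, size=5, init=0.5, seed=0):
        self.mem = np.full(size, init)   # historical memory of successful F
        self.k = 0                       # circular write index
        self.rng = np.random.default_rng(seed)

    def sample(self):
        # Perturb a randomly chosen memory cell with Cauchy noise.
        mu = self.rng.choice(self.mem)
        f = self.rng.standard_cauchy() * 0.1 + mu
        return float(np.clip(f, 0.01, 1.0))

    def update(self, successful_f, improvements):
        """Record F values that produced improved trial vectors."""
        if len(successful_f) == 0:
            return
        f = np.asarray(successful_f, dtype=float)
        w = np.asarray(improvements, dtype=float) / np.sum(improvements)
        self.mem[self.k] = np.sum(w * f**2) / np.sum(w * f)  # weighted Lehmer mean
        self.k = (self.k + 1) % len(self.mem)

h = SuccessHistory()
h.update([0.4, 0.9], improvements=[1.0, 3.0])
print(h.mem, h.sample())
```

Weighting by the fitness improvement biases the memory toward parameter values that produced large gains, not merely any improvement, which is central to this family of methods.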
In the medical imaging field, we need fast deformable registration methods especially in intra-operative settings characterized by their time-critical applications. Image registration studies which are based on graphics processing units (GPUs) provide fast implementations. However, only a small number of these GPU-based studies concentrate on deformable registration. We implemented Demons, a widely used deformable image registration algorithm, on NVIDIA's Quadro FX 5600 GPU with the compute unified device architecture (CUDA) programming environment. Using our code, we registered 3D CT lung images of patients. Our results show that we achieved the fastest runtime among the available GPU-based Demons implementations. Additionally, regardless of the given dataset size, we provided a factor of 55 speedup over an optimized CPU-based implementation. Hence, this study addresses the need for on-line deformable registration methods in intra-operative settings by providing the fastest and most scalable Demons implementation available to date. In addition, it provides an implementation of a deformable registration algorithm on a GPU, an understudied type of registration in the general-purpose computation on graphics processors (GPGPU) community. | ['Pinar Muyan-Ozcelik', 'John D. Owens', 'Junyi Xia', 'Sanjiv S. Samant'] | Fast Deformable Registration on the GPU: A CUDA Implementation of Demons | 267,781
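The force computation at the heart of the classic Thirion demons algorithm fits in a few lines of numpy; the GPU version parallelizes exactly this independent per-voxel arithmetic. The 2D sketch below shows a single (unsmoothed) update step on toy images; real implementations also warp the moving image and Gaussian-smooth the displacement field each iteration.

```python
import numpy as np

def demons_step(fixed, moving, disp, alpha=1.0):
    """One Thirion demons update for a 2D displacement field (a sketch).

    The driving force at each pixel is (m - f) * grad(f), normalised by
    |grad f|^2 + alpha * (m - f)^2, accumulated into the displacement.
    """
    gy, gx = np.gradient(fixed)
    diff = moving - fixed
    denom = gx**2 + gy**2 + alpha * diff**2
    denom[denom == 0] = 1.0          # avoid division by zero in flat regions
    disp[..., 0] += diff * gx / denom
    disp[..., 1] += diff * gy / denom
    return disp

fixed = np.zeros((64, 64)); fixed[20:40, 20:40] = 1.0
moving = np.roll(fixed, 3, axis=1)   # shifted copy to register back
disp = np.zeros((64, 64, 2))
disp = demons_step(fixed, moving, disp)
print(f"max update magnitude: {np.abs(disp).max():.3f}")
```

Because every pixel's force depends only on local values, the loop maps naturally onto one CUDA thread per voxel, which is what makes the large GPU speedups possible.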
The paper addresses the problem of reducing the influence of natural occlusion on face recognition accuracy. It is based on the transformation (two-dimensional Karhunen-Loeve Transform) of face parts into local subspaces calculated by means of two-dimensional Principal Component Analysis and two-dimensional Linear Discriminant Analysis. We use a sequence of operations consisting of face scale and orientation normalization and individual facial region extraction. Independent recognitions are performed on the extracted facial regions and their results are combined in order to perform a final classification. Experiments on images taken from publicly available datasets show that such a simple algorithm is able to successfully recognize faces without high computational overhead, in contrast to more sophisticated methods presented recently. In comparison to the typical whole-face-based approach, the developed method gives significantly better accuracy. | ['Paweł Forczmański', 'Piotr Łabȩdź'] | Improving the Recognition of Occluded Faces by Means of Two-dimensional Orthogonal Projection into Local Subspaces | 666,622
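A minimal sketch of the 2DPCA projection used for each facial region (standard two-dimensional PCA, not the paper's full pipeline) is shown below; the "face" data are random arrays standing in for normalized face regions.

```python
import numpy as np

def two_d_pca(images, k):
    """2DPCA: project each h x w image onto the top-k eigenvectors of the
    image covariance matrix G = mean((A - mean)^T (A - mean))."""
    A = np.asarray(images, dtype=float)
    centered = A - A.mean(axis=0)
    G = np.einsum('nij,nik->jk', centered, centered) / len(A)
    vals, vecs = np.linalg.eigh(G)
    proj = vecs[:, np.argsort(vals)[::-1][:k]]     # w x k projection matrix
    return A @ proj, proj

rng = np.random.default_rng(0)
faces = rng.random((30, 32, 32))   # hypothetical normalized face regions
features, proj = two_d_pca(faces, k=5)
print(features.shape)              # (30, 32, 5): compact per-region features
```

Unlike classical PCA, 2DPCA works on image matrices directly rather than flattened vectors, so the covariance matrix stays small (w x w) and each facial region yields a compact feature matrix for its independent classifier.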
In this paper, a frequency-domain equalization architecture for the downlink of a space-time block coded MIMO-CDMA system is proposed. This architecture exploits transmit diversity by equally dividing the N_T (N_T = 2Q, where Q is a positive integer) transmit antennas into Q groups and implementing an Alamouti-like space-time block coding scheme for each group. At the receiver, single-carrier minimum mean-square-error frequency-domain equalization is combined with successive interference cancellation to combat the various interferences resulting from frequency-selective fading channels under multi-antenna and multi-user conditions. It is shown that our proposed architecture significantly outperforms conventional MIMO-CDMA structures. | ['Zhifeng Ruan', 'Qixing Wang', 'Baojin Li', 'Yongyu Chang', 'Dacheng Yang'] | A Frequency-Domain Equalization Architecture for Space-Time Block Coded MIMO-CDMA Systems | 280,927
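The Alamouti-like encoding applied in each of the Q antenna groups can be sketched directly: over two symbol periods, antenna 1 of a group sends [s1, -s2*] and antenna 2 sends [s2, s1*]. The numpy sketch below encodes a few hypothetical QPSK symbols for one group.

```python
import numpy as np

def alamouti_encode(symbols):
    """Alamouti space-time block coding for one 2-antenna group (a sketch).

    Symbols are taken in pairs (s1, s2); over two symbol periods antenna 1
    sends [s1, -conj(s2)] and antenna 2 sends [s2, conj(s1)].
    """
    s = np.asarray(symbols).reshape(-1, 2)
    ant1 = np.column_stack([s[:, 0], -np.conj(s[:, 1])]).ravel()
    ant2 = np.column_stack([s[:, 1],  np.conj(s[:, 0])]).ravel()
    return ant1, ant2

qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))  # four QPSK symbols
a1, a2 = alamouti_encode(qpsk)
print(np.round(a1, 3))
print(np.round(a2, 3))
```

The orthogonal structure of each pair is what lets the receiver separate the two antenna streams with simple linear processing, which the proposed architecture combines with frequency-domain MMSE equalization per group.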
This paper proposes an econometric model that explains the efficiency of electric power plants between 1985 and 2005, and also builds a network among the different states based on fuel trade during the period 1990 to 2005 to evaluate the dynamic relationship among efficiency, technology adoption, and the following variables: the corporate decision to acquire emission abatement technology, prices and quantities of low- and high-sulfur coal, SO_2 allowance prices, and the operation and maintenance cost of abatement technology. This paper concludes that firms respond to the imposition of pollution control regulations such as the Clean Air Act (CAA) by selecting an efficient strategy that simultaneously controls emissions and minimizes costs. Efficiency increases when more sub-bituminous coal is used, because of its lower level of emissions in comparison to bituminous coal, and when the electricity sector is deregulated. An important factor that affects the use of coal is its cost. Thus, when the cost of sub-bituminous coal increases, we expect that less sub-bituminous coal is used and, as a result, both technical and scale efficiency decrease. Likewise, larger plants are less efficient because of their limited ability to switch technology quickly and minimize costs. Technical efficiency has an important positive impact on the dynamics of the coal trade among states, considering that electricity generating plants located closer to the Powder River Basin seem to benefit more from their proximity to major sub-bituminous sources. | ['Bernardo Creamer', 'Germán G. Creamer'] | Efficiency and Trade Network Analysis of the Electricity Market: 1985-2005 | 922,708
Integrating agents into virtual worlds | ['Ralph Peters', 'Andreas Graeff', 'Christian Paul'] | Integrating agents into virtual worlds | 185,405 |