abstract | authors | title | __index_level_0__
---|---|---|---|
The paper addresses the synergy between operations, technology management and human resource management by way of a study of operational innovation in firms within the Cypriot clothing manufacturing sector. Three case studies of change involving the automation of manufacturing are analysed with reference to the notion of a "coping cycle". In each case, the firm in question experienced difficulties with the implementation of operational innovations which made change problematic to sustain. Key factors reside in the general nature and historical context of employee relations and in the tactics employed to implement change. The implications for the effective management of change are discussed within the paper. More broadly, the paper identifies human resource issues as falling within the proper scope of operations and technology management research. There remains a need to temper the traditional "hardware" focus of studies of operational/technological innovation with the ... | ['Audley Genus', 'Maria Kaplani'] | Managing operations with people and technology | 177,707 |
Materialized View Construction Using Linearizable Nonlinear Regression. | ['Soumya Sen', 'Partha Ghosh', 'Agostino Cortesi'] | Materialized View Construction Using Linearizable Nonlinear Regression. | 976,308 |
In this paper, firstly, processes of classical inference are reviewed as granular reasoning from the point of view of reconstructing Kripke-style models with granularity. The essential point of the reconstruction is that some possible worlds are amalgamated to generate granules of worlds and vice versa. This is also called a zoom reasoning system. Then, the idea is applied to fuzzy reasoning processes by considering fuzzily granularized possible worlds. There, linguistic truth values with linguistic hedges can be introduced naturally. | ['Tetsuya Murai', 'Yasuo Kudo', 'Van-Nam Huynh', 'Akira Tanaka', 'Mineichi Kudo'] | A note on fuzzy granular reasoning | 528,419 |
We provide simple and fast polynomial-time approximation schemes (PTASs) for several variants of the max-sum diversification problem which, in its most basic form, is as follows: given n points p_1, …, p_n ∈ ℝ^q and an integer k, select k points such that the average Euclidean distance between these points is maximized. This problem is commonly applied in web search and information retrieval in order to select a diverse set of representative points from the input. In this context, it has recently received a lot of attention. We present new techniques to analyze natural local-search algorithms. This leads to a (1 − O(1/k))-approximation for distances of negative type, even subject to a general matroid constraint of rank k, in time O(nk² log k), when assuming that distance evaluations and calls to the independence oracle are constant time. Negative-type distances include as special cases Euclidean and Manhattan distances, among other natural distances. Our result easily transforms into a PTAS. It improves on the only previously known PTAS for this setting, which relies on convex optimization techniques in an n-dimensional space and is impractical for large data sets. In contrast, our procedure has an (optimal) linear dependence on n. Using generalized exchange properties of matroid intersection, we show that a PTAS can be obtained for matroid-intersection constraints as well. Moreover, our techniques, being based on local search, are conceptually simple and allow for various extensions. In particular, we get asymptotically optimal O(1)-approximations when combining the classic dispersion function with a monotone submodular objective, which is a very common class of functions to measure diversity and relevance. This result leverages recent advances on local-search techniques based on proxy functions to obtain optimal approximations for monotone submodular function maximization subject to a matroid constraint. | ['Alfonso Cevallos', 'Friedrich Eisenbrand', 'Rico Zenklusen'] | Local search for max-sum diversification | 865,831 |
The study of static friction in control engineering is the subject of much research due to its impact on the degradation of control-loop performance. Mathematical modeling of systems with static friction is not straightforward. A precise and proper model of this phenomenon is a key factor in model-based control to mitigate its effect. As the number of smart valves in industry increases, demand for identification of such valves is rising. In these valves, identification of the process is limited to the control signal (OP) and valve position (MV). By taking advantage of the Hammerstein approach, identification is divided into two parts: a linear dynamic part and a nonlinear static part. In this paper, an adaptive neuro-fuzzy inference system (ANFIS) is used for identification of the nonlinear static part of the plant. The linear dynamic part can be identified using linear identification methods. Results reveal that ANFIS, which integrates neural-network and fuzzy-logic principles and has the potential to capture the benefits of both in a single framework, captures well the key model of systems with smart valves subject to static friction. | ['M. A. Daneshwar', 'Norlaili Mohd Noh'] | Adaptive neuro-fuzzy inference system identification model for smart control valves with static friction | 937,355 |
Management of chronic illness presents a challenge to patients. Asthma is a common chronic disease affecting millions of Americans each year. Particularly, maternal asthma puts pregnant women at high risk for many complications and requires special day-to-day management. Tailored materials are shown to be an effective means of communication between patients and physicians. To improve asthma management in this high-risk group, we developed a web-based tool to facilitate the creation and dissemination of tailored materials for asthmatic pregnant women. | ['Eun-Young Kim', 'Sandra Kogan', 'Cristiano L', 'Barrett T. Kitch', 'Qing Zeng'] | Facilitate the Delivery of Tailored Materials to Asthmatic Pregnant Women. | 554,542 |
In non-trivial software development projects, the planning and allocation of resources is an important and difficult task. Estimation of the work time to fix a bug is commonly used to support this process. This research explores the viability of using data mining tools to predict the time to fix a bug given only the basic information known at the beginning of a bug's lifetime. To address this question, a historical portion of the Eclipse Bugzilla database is used for modeling and predicting bug lifetimes. A bug history transformation process is described and several data mining models are built and tested. Interesting behaviours derived from the models are documented. The models can correctly classify up to 34.9% of the bugs into a discretized log-scaled lifetime class. | ['Lucas D. Panjer'] | Predicting Eclipse Bug Lifetimes | 17,620 |
We present a new fault simulation algorithm for realistic break faults in the p-networks and n-networks of static CMOS cells. We show that Miller effects can invalidate a test just as charge sharing can, and we present a new charge-based approach that efficiently and accurately predicts the worst case effects of Miller capacitances and charge sharing together. Results on running our fault simulator on ISCAS85 benchmark circuits are provided. | ['Haluk Konuk', 'F. Joel Ferguson', 'Tracy Larrabee'] | Accurate and Efficient Fault Simulation of Realistic CMOS Network Breaks | 345,413 |
Expander Construction in VNC 1 . | ['Samuel R. Buss', 'Valentine Kabanets', 'Antonina Kolokolova', 'Michal Koucky'] | Expander Construction in VNC 1 . | 978,767 |
With the ever increasing availability of the Internet and electronic media rich in graphical and pictorial information - for communication, commerce, entertainment, art, education - it has been hard for the visually impaired community to keep up. We propose a non-invasive system that can be used to convey graphical and pictorial information via touch and hearing. The main idea is that the user actively explores a two-dimensional layout consisting of one or more objects on a touch screen with the finger while listening to auditory feedback. We have demonstrated the efficacy of the proposed approach in a range of tasks, from basic shape identification to perceiving a scene with several objects. The proposed approach is also expected to contribute to research in virtual reality, immersive environments, and medicine. | ['Pubudu Madhawa Silva', 'Thrasyvoulos N. Pappas', 'Joshua Atkins', 'James E. West'] | Perceiving graphical and pictorial information via touch and hearing | 376,143 |
Although FPGAs are a cost-efficient alternative to both ASICs and general-purpose processors, they still result in designs which are more than an order of magnitude more costly and slower than their equivalents implemented in dedicated logic. This efficiency gap makes FPGAs less suitable for high-volume cost-sensitive applications (e.g. embedded systems). We show that the intrinsic cost of traditional general-purpose FPGAs can be reduced if they are designed to target an application domain or a class of applications only. We propose a method of application-domain characterization and apply it to characterize DSP. A novel FPGA logic block architecture derived from such an analysis, which exploits properties of target applications, is presented. Its key feature is the 'mixed-level granularity', a trade-off between the fine and coarse granularity required for the implementation of datapath and random logic functions, respectively. This leads to a factor-of-four improvement in LUT memory size compared to commercial FPGAs, and, assuming a standard-cell implementation, a 1.6-2.8 times lower datapath mapping cost. A modified mixed-grain architecture with ALU-like functionality reduces the LUT memory size by a factor of 16 compared to commercial FPGAs, and mapped onto standard cells has a 1.9-3.3 times higher datapath mapping efficiency. For these reasons, the proposed FPGA architectures may be an interesting alternative to traditional general-purpose FPGA devices, especially if characteristics of a target application domain are known a priori. | ['Katarzyna Leijten-Nowak', 'Jef L. van Meerbergen'] | An FPGA architecture with enhanced datapath functionality | 504,488 |
Corporate governance has become one of the most prominent topics for management scholars, top executives, and regulators alike over the last couple of decades. Originally a domain of economics and finance (as well as law), the theme has spread to other areas such as strategic management and organization theory in recent years. This paper will first give a brief overview on major developments in the field of corporate governance. These developments encompass, on the one hand, the extension of the classical focus on formal systems and structures to perspectives that address behavioral as well as process issues. On the other hand, the terrain has been broadened from its traditional narrow interest in the principal agent problem between shareholders and management to the more comprehensive stakeholder approach of corporate governance. Building on these developments, this paper will subsequently elaborate on a further extension of the topic by emphasizing the concept of stakeholder opportunism. The classical principal agent problem results from possible opportunistic behavior of the management, which compromises the interests of the shareholders. However, as the notion of stakeholder opportunism points out, not only the management of a company can exercise opportunism; rather, all stakeholders of a company can (and will to some extent) have options to behave opportunistically and at the same time bear the risk of being victims of the opportunism of other stakeholders. This paper develops a conceptual framework for analyzing the determinants and dynamics of the various stakeholders' opportunism options and risks as well as of the actual opportunistic behavior of stakeholders. Employing this framework, implications of the notion of stakeholder opportunism for managers and regulators are discussed, and perspectives for further research are identified. | ['Axel v. Werder'] | Corporate Governance and Stakeholder Opportunism | 386,386 |
Real-time Ethernet (RTE) is widely recognized for its potential to provide a unified communication backbone for next-generation heterogeneous distributed systems. However, most of the existing research in RTE technologies has traditionally focused on formal models and theoretical analyses of timing properties, usually omitting the associated implementation challenges for testing them in practice. This gap between theory and practice prevents experimental validation of the claimed properties, which in turn hinders the pace of innovation and adoption of the technology in industrial settings. This paper aims at narrowing the theory-practice gap by characterizing a comprehensive open-source RTE framework that explores emerging challenges in real-time networking, including the provision of ultra-low latency and jitter, dynamic bandwidth management, and segmentation within large networks. This work integrates research on formal abstractions for dynamic time-division multiple access arbitration and technological insights from modern hardware infrastructure, and uses a representative distributed video processing application to provide reproducible evidence of the achieved properties in multihop Ethernet settings. By leveraging readily available technology and an open-source design, the proposed framework facilitates further exploration and experimental validation of properties that are beyond the scope of current commercial technologies, encouraging evidence-based discussions to accelerate development and adoption of new standards for next-generation industrial networks. | ['Gonzalo Carvajal', 'Luis Araneda', 'Alejandro Wolf', 'Miguel Figueroa', 'Sebastian Fischmeister'] | Integrating Dynamic-TDMA Communication Channels into COTS Ethernet Networks | 699,060 |
This paper concerns the issue of new robust exponential stability for uncertain complex neutral systems with mixed time-varying delays. In terms of a Lyapunov function and a linear matrix inequality (LMI), the paper presents some new delay-dependent stability conditions for time-varying delays. An illustrative example is given to demonstrate the effectiveness and reduced conservativeness of our method. | ['Jiqing Qiu', 'Yi Li', 'Long Zhao', 'Zirui Xing'] | New robust exponential stability of complex neutral system with mixed time-varying delays and nonlinear perturbations | 498,903 |
Traditional astronomy has focused on properties of the steady-state universe. Recent discoveries of strong, isolated radio pulses have, however, invigorated interest in transient phenomena. These radio transient events are rare, necessitating long observing times to give reasonable statistics. The National Aeronautics and Space Administration/Jet Propulsion Laboratory (NASA/JPL) Deep Space Network (DSN) tracks spacecraft continuously with several large antennas having low system noise temperature. The DSN also returns substantial predetection bandwidth from the antennas (400 MHz at X-band), currently processing only a fraction of that band for spacecraft tracking. This unused wideband capability is ideal for study of the radio transient sky. Here we describe and show initial performance results of a prototype receiver to search for such transients. This prototype is implemented as a firmware change in an operational DSN tracking receiver and can thus run in parallel with operational spacecraft tracks using existing spare receiver hardware. An operational version of this system could be deployed throughout the DSN to acquire data over extended periods and substantially improve the statistics of rare radio transient events. | ['Chau M. Buu', 'Fredrick A. Jenet', 'J. W. Armstrong', 'Sami W. Asmar', 'Mario Beroiz', 'T. Cheng', "Jeremy O'Dea"] | A Prototype Radio Transient Survey Instrument for Piggyback Deep Space Network Tracking | 46,554 |
The performance of parallel and distributed applications is highly dependent on the characteristics of the execution environment. In such environments, the network topology and characteristics determine how fast data can be transmitted and placed in the resources. These are key phenomena for understanding the behavior of such applications and possibly improving it. Unfortunately, few visualizations available to the analyst are capable of accounting for such phenomena. In this paper, we propose an interactive topology-based visualization technique based on data aggregation that enables correlating network characteristics, such as bandwidth and topology, with application performance traces. We show that this kind of visualization enables the exploration and understanding of non-trivial behaviors that are impossible to grasp with classical visualization techniques. We also show that the combination of multi-scale aggregation and dynamic graph layout allows our visualization technique to scale seamlessly to large distributed systems. These results are validated through a detailed analysis of a high performance computing scenario and of a grid computing scenario. | ['Lucas Mello Schnorr', 'Arnaud Legrand', 'Jean-Marc Vincent'] | Interactive analysis of large distributed systems with scalable topology-based visualization | 302,468 |
The utility of including loops in plans has been long recognized by the planning community. Loops in a plan help increase both its applicability and the compactness of its representation. However, progress in finding such plans has been limited largely due to lack of methods for reasoning about the correctness and safety properties of loops of actions. We present novel algorithms for determining the applicability and progress made by a general class of loops of actions. These methods can be used for directing the search for plans with loops towards greater applicability while guaranteeing termination, as well as in post-processing of computed plans to precisely characterize their applicability. Experimental results demonstrate the efficiency of these algorithms. We also discuss the factors which can make the problem of determining applicability conditions for plans with loops incomputable. | ['Siddharth Srivastava', 'Neil Immerman', 'Shlomo Zilberstein'] | Applicability conditions for plans with loops: Computability results and algorithms | 245,335 |
Individual distinguishing pheromone in a multi-robot system for a Balanced Partitioned Surveillance task | ['Rodrigo Calvo', 'Ademir Aparecido Constantino', 'Mauricio Figueiredo'] | Individual distinguishing pheromone in a multi-robot system for a Balanced Partitioned Surveillance task | 942,473 |
In this paper, we consider a generalized multivariate regression problem where the responses are some functions of linear transformations of predictors. We assume that these functions are strictly monotonic, but their form and parameters are unknown. We propose a semi-parametric estimator based on the ordering of the responses which is invariant to the functional form of the transformation function as long as it is strictly monotonic. We prove that our estimator, which maximizes the rank similarity between responses and linear transformations of predictors, is a consistent estimator of the true coefficient matrix. We also identify the rate of convergence and show that the squared estimation error decays with a rate of o(1/n). We then propose a greedy algorithm to maximize the highly non-smooth objective function of our model and examine its performance through extensive simulations. Finally, we compare our algorithm with traditional multivariate regression algorithms over synthetic and real data. | ['Milad Kharratzadeh', 'Mark Coates'] | Semi-parametric Order-based Generalized Multivariate Regression | 640,098 |
This paper proposes a new approach for making simulations realistic. This approach is based on the principle of "trace driven simulation", i.e. using the results of actual traffic trace analysis in order to reproduce the same experimental conditions in simulation. The main principle of the approach proposed in this paper is to make simulation traffic sources replay, under certain conditions, the actual traffic traces captured on real networks. This paper describes the implementation of this approach in the NS simulator, and evaluates it by comparing the characteristics of the traces obtained with our replay approach against the original data traces. The parameters considered for the comparison are the usual traffic parameters such as throughput, packet rate, etc., but also everything related to traffic dynamics, i.e. second-order statistical moments such as the autocorrelation of traffic or long-range dependence. | ['Philippe Owezarski', 'Nicolas Larrieu'] | A trace based method for realistic simulation | 151,974 |
In this paper, a systematic controller design approach is proposed to guarantee both closed-loop stability and desired performance of the overall system by effectively combining genetic algorithms (GAs) with Lyapunov's direct-controller design method. The effectiveness of the approach is shown by using a simple and efficient decimal GA optimization procedure to tune and optimize the performance of a Lyapunov-based robust controller for a single-link flexible robot. The feedback gains of the controller are tuned by the GA optimization process to achieve good results for tip motion control of the single-link flexible robot based on some suitable fitness functions. The paper includes results of simulation experiments demonstrating the effectiveness of the proposed genetic algorithm approach. | ['Shuzhi Sam Ge', 'Tong Heng Lee', 'G. Zhu'] | Genetic algorithm tuning of Lyapunov-based controllers: an application to a single-link flexible robot system | 165,558 |
CAN ONTOLOGIES BE SUFFICIENT SOLUTION TO REQUIREMENT ENGINEERING PROBLEM | ['Richa Sharma', 'Kanad K. Biswas'] | CAN ONTOLOGIES BE SUFFICIENT SOLUTION TO REQUIREMENT ENGINEERING PROBLEM | 799,331 |
In this paper, we address the problem of predicting cQA answer quality as a classification task. We propose a multimodal deep belief nets based approach that operates in two stages: First, a joint representation is learned by feeding both textual and non-textual features into a deep learning network. Then, the joint representation learned by the network is used as input features for a linear classifier. Extensive experimental results conducted on two cQA datasets demonstrate the effectiveness of our proposed approach. | ['Haifeng Hu', 'Bingquan Liu', 'Baoxun Wang', 'Ming Liu', 'Xiaolong Wang'] | Multimodal DBN for Predicting High-Quality Answers in cQA portals | 615,083 |
Database search for images containing icons with specific mutual spatial relationships can be facilitated by an appropriately structured index. For the case of images containing subsets each of which consist of three icons, the one-to-one correspondence between (distinct) point triples and triangles allows the use of such triangle attributes as position, size, orientation, and "shape" in constructing a point-based index, in which each triangle maps to a single point in a resulting hyperdimensional index space. Size (based on the triangle perimeter) can be represented by a single linear dimension. The abstract "shape" of a triangle induces a space that is inherently two-dimensional, and a number of alternative definitions of a basis for this space are examined. Within a plane, orientation reduces to rotation, and (after assignment of a reference direction for the triangle) can be represented by a single, spatially closed dimension. However, assignment of a reference direction for triangles possessing a k-fold rotational symmetry presents a significant challenge. Methods are described for characterizing shape and orientation of triangles, and for mapping these attributes onto a set of linear axes to form a combined index. The shape attribute is independent of size, orientation, and position, and the characterization of shape and orientation is stable with respect to small variations in the indexed triangles. | ['Charles Ben Cranston', 'Hanan Samet'] | Indexing Point Triples Via Triangle Geometry | 55,222 |
We present efficient algorithms for dealing with the problem of missing inputs (incomplete feature vectors) during training and recall. Our approach is based on the approximation of the input data distribution using Parzen windows. For recall, we obtain closed form solutions for arbitrary feedforward networks. For training, we show how the backpropagation step for an incomplete pattern can be approximated by a weighted averaged backpropagation step. The complexity of the solutions for training and recall is independent of the number of missing features. We verify our theoretical results using one classification and one regression problem. | ['Volker Tresp', 'Ralph Neuneier', 'Subutai Ahmad'] | Efficient Methods for Dealing with Missing Data in Supervised Learning | 339,717 |
It is often possible to save storage space in a computer by storing only the differences among data items rather than the entire items. For example, suppose we have two records A and B. We should store all of A, then for B store a pointer to A and the differences between A and B. If A and B are similar, there will be few differences and storage space can be saved. | ['Kang', 'Lee', 'Chin-Liang Chang', 'Shi-Kuo Chang'] | Storage Reduction Through Minimal Spanning Trees and Spanning Forests | 150,176 |
In recent years, we have seen the emergence of multi-GS/s medium-to-high-resolution ADCs. Presently, SAR ADCs dominate low-speed applications and time-interleaved SARs are becoming increasingly popular for high-speed ADCs [1,2]. However, the SAR architecture faces two key problems in simultaneously achieving multi-GS/s sample rates and high resolution: (1) the fundamental trade-off between comparator noise and speed limits the speed of single-channel SARs, and (2) highly time-interleaved ADCs introduce complex lane-to-lane mismatches that are difficult to calibrate with high accuracy. Therefore, pipelined [3] and pipelined-SAR [4] remain the most common architectural choices for high-speed high-resolution ADCs. In this work, a pipelined ADC achieves a 4GS/s sample rate, using a 4-step capacitor and amplifier-sharing front-end MDAC architecture with 4-way sampling to reduce noise, distortion and power, while overcoming common issues for SHA-less ADCs. | ['Jiangfeng Wu', 'Acer Chou', 'Tianwei Li', 'Rong Wu', 'Tao Wang', 'Giuseppe Cusmai', 'Sha-Ting Lin', 'Cheng-Hsun Yang', 'Gregory Unruh', 'Sunny Raj Dommaraju', 'Mo M. Zhang', 'Po Tang Yang', 'Wei-Ting Lin', 'X. Chen', 'Dongsoo Koh', 'Qingqi Dou', 'H. Mohan Geddada', 'Juo-Jung Hung', 'Massimo Brandolini', 'Young Shin', 'Hung-Sen Huang', 'Chun-Ying Chen', 'Ardie Venes'] | 27.6 A 4GS/s 13b pipelined ADC with capacitor and amplifier sharing in 16nm CMOS | 657,578 |
High workload and throughput requirements of image and video processing applications can be sustained on a many-core system. However, inefficient parallelization and processing assignments to the cores result in reduced system efficiency. Eliminating them necessitates a power-efficient and balanced workload distribution among the cores. This paper addresses these challenges by introducing a novel workload-balancing and adaptation scheme. Our scheme accounts for the application characteristics and the underlying hardware, and the variation of load. Automatic selection of the number of cores and distribution of workload to each core depends on the throughput requirements, available number of cores, allowable voltage–frequency settings, and data content. Moreover, runtime derivation and fine-tuning of the workload-dependent frequency estimation models of each core are achieved using a closed-loop feedback mechanism. Furthermore, we propose an optional feedback control-based workload-tuning scheme that can further reduce the total power consumption. A case study of an advanced multithreaded video application demonstrates up to ~42% power savings (average ~39%) with negligible video quality degradation, using our proposed power-efficient workload-balancing and tuning. | ['Muhammad Usman Karim Khan', 'Muhammad Shafique', 'Jörg Henkel'] | Power-Efficient Workload Balancing for Video Applications | 717,763 |
A survey comparing methods for constructing smooth parametric surfaces to interpolate vertices and normal vectors of a triangulated polyhedron is presented. Particular attention is paid to the quality or fairness of the fit, measured by examining how curvature is distributed over the surface. The methods surveyed all generate surfaces composed of one or more surface patches per triangular facet of the input polyhedron. The approaches require an analysis of the number of constraints versus the number of degrees of freedom. Constraints include not only the interpolation conditions, but also continuity conditions imposed where adjacent surface patches abut. Once the constraints are satisfied, there are generally surplus degrees of freedom. It is shown that the setting of these remaining free parameters can dramatically affect the shape of the surface, so the various methods are classified according to how they assign values to the free parameters. | ['Michael Lounsbery', 'Stephen Mann', 'Tony DeRose'] | Parametric surface interpolation | 142,455 |
This paper studies coverage probabilities of randomly deployed radio nodes in single-destination networks. Such networks are typically operated for data aggregation in machine-to-machine (M2M) communication scenarios. In large-scale deployments, diverse environments usually result in heterogeneous propagation channels. We address the question of to what extent increased medium access can compensate for impaired propagation conditions. Based on superimposed Poisson Point Processes and a strongest-interferer approximation, we propose a closed-form solution for the probability of a successful packet transmission under heterogeneous channel conditions. The applicability of the framework is demonstrated for an exemplary network configuration with a given latency constraint. Jointly optimising the medium access probabilities can either increase the network coverage area (e.g. by a factor of two) or reduce the overall network's power consumption by more than 30% in sufficiently dense deployments. | ['Hendrik Lieske', 'Joerg Robert', 'Albert Heuberger'] | Improving Medium Access in Single-Destination Networks with Heterogeneous Propagation Channels | 653,534 |
In this paper, we propose a static (compile-time) scheduling extension that considers reconfiguration and task execution together when scheduling tasks on reconfigurable hardware, designated as Mutually Exclusive Groups (-MEG), that can be used to extend any static list scheduler. In simulation, using -MEG generates higher quality schedules than those generated by the hardware-software co-scheduler proposed by Mei, et al. [6] and using a single configuration with the base scheduler. Additionally, we propose a dynamic (run-time), fault tolerant scheduler targeted to reconfigurable hardware. We present promising preliminary results using the proposed fault-tolerant dynamic scheduler, showing that application performance gracefully degrades when shrinking the available processing resources. | ['Justin Teller', 'Füsun Özgüner'] | Scheduling tasks on reconfigurable hardware with a list scheduler | 385,308 |
We analyze the performance of an adaptive chaotic synchronization system under information constraints assuming that some system parameters are unknown and only the system output is measured. Such a problem was studied previously in the absence of information constraints based on an adaptive observer scheme, allowing for its use in message transmission systems. We provide analytical bounds for the closed-loop system performance (asymptotic synchronization error) and conduct a numerical case study for a typical chaotic system, namely the Chua circuit, in the presence of information constraints. It is shown that the time-varying quantizer with one-step memory provides a reasonable approximation of the minimum transmission rate for adaptive state estimation. | ['Alexander L. Fradkov', 'Boris Andrievsky', 'Robin J. Evans'] | Adaptive Observer-Based Synchronization of Chaotic Systems With First-Order Coder in the Presence of Information Constraints | 329,860 |
We present an efficient procedure for factorising probabilistic potentials represented as probability trees. This new procedure is able to detect some regularities that cannot be captured by existing methods. In cases where an exact decomposition is not achievable, we propose a heuristic way to carry out approximate factorisations guided by a parameter called factorisation degree, which is fast to compute. We show how this parameter can be used to control the tradeoff between complexity and accuracy in approximate inference algorithms for Bayesian networks. | ['Andrés Cano', 'Manuel Gómez-Olmedo', 'Cora B. Pérez-Ariza', 'Antonio Salmerón'] | Fast factorisation of probabilistic potentials and its application to approximate inference in Bayesian networks | 118,146 |
Novel Lightwave-Interferometric Phase Detection for Phase Stabilization of Two-Tone Coherent Millimeter-Wave/Microwave Carrier Generation | ['Shota Takeuchi', 'Kazuki Sakuma', 'Kazutoshi Kato', 'Yasuyuki Yoshimizu', 'Yu Yasuda', 'Shintaro Hisatake', 'Tadao Nagatsuma'] | Novel Lightwave-Interferometric Phase Detection for Phase Stabilization of Two-Tone Coherent Millimeter-Wave/Microwave Carrier Generation | 871,427 |
Previous research on disasters and crises has shown that, in some cases, citizens from affected communities collectively participate in emergency response. Social media technologies now provide local and even non-local citizens an additional means for collective response. By using social media technologies, including Twitter, distributed citizens can generate and disseminate their own crisis-related information to a wide audience bypassing official communications. Researchers have found that citizens use Twitter for information production, broadcasting, brokering, and organization during violent crises [1–2] and natural disasters [3–6]. Although the information behaviors have been analyzed, ethical considerations have yet to be addressed. | ['Thomas Heverin'] | Ethical concerns of twitter use for collective crisis response | 128,564 |
We present a non-linear model transformation for adapting Gaussian mixture HMMs using both static and dynamic MFCC observation vectors to additive noise and constant system tilt. This transformation depends upon a few compensation coefficients which can be estimated from channel distorted speech via maximum-likelihood stochastic matching. Experimental results validate the effectiveness of the adaptation. We also provide an adaptation strategy which can result in improved performance at reduced computational cost compared with a straightforward implementation of stochastic matching. | ['Shuen Kong Wong', 'Bertram E. Shi'] | Channel and noise adaptation via HMM mixture mean transform and stochastic matching | 117,299 |
This work considers the relay selection and resource allocation problem (i.e., link scheduling, and rate allocation) for multi-source, multi-relay dual-hop wireless networks. The relays employ buffers to store the received data from the sources for future transmissions. End-to-end (E2E) delay of each traffic flow originated from a source or a relay is constrained in terms of maximum allowable delay-outage probability. To solve this problem, we first study the resource allocation problem to maximize the constant supportable arrival rate of a non-prioritized source under minimum rate requirements of the prioritized sources and relays for a given relay selection solution. Then, the optimal relay selection can be determined to support the largest rate of the non-prioritized source among all possible relay selection solutions. We derive the resource allocation solutions using asymptotic delay analysis and convex optimization techniques. We also develop an online allocation algorithm which does not require the knowledge of the fading statistics by using stochastic approximation theory. Numerical results are presented to demonstrate the usefulness of the proposed resource allocation design for relay selection under different delay and rate constraint regimes. | ['Khoa Tran Phan', 'Tho Le-Ngoc', 'Long Bao Le'] | Relay Selection, Link Scheduling, and Rate Allocation in Dual-Hop Buffer-Aided Networks with Statistical Delay Constraints | 660,787 |
Most commercial digital cameras use color filter arrays to sample red, green, and blue colors according to a specific pattern. At the location of each pixel only one color sample is taken, and the values of the other colors must be interpolated using neighboring samples. This color plane interpolation is known as demosaicing; it is one of the important tasks in a digital camera pipeline. If demosaicing is not performed appropriately, images suffer from highly visible color artifacts. In this paper we present a new demosaicing technique that uses inter-channel correlation effectively in an alternating-projections scheme. We have compared this technique with six state-of-the-art demosaicing techniques, and it outperforms all of them, both visually and in terms of mean square error. | ['Bahadir K. Gunturk', 'Yucel Altunbasak', 'Russell M. Mersereau'] | Color plane interpolation using alternating projections | 510,974 |
Unlike user studies, inspection based methods are not widely researched in the area of Child Computer Interaction. This paper reports the findings of a study to empower teenagers to facilitate a heuristic evaluation with their peers acting as the expert evaluators. In total 20 teenagers participated in the study, with four of the teenagers acting as facilitators and the remainder as evaluators. The results showed that teenagers struggled to act in the role as facilitator, struggling to explain the heuristic evaluation process and keep the evaluators on track. The evaluators found very few problems and became distracted from the evaluation opting to play on other features of the device rather than the game itself. Further research will be performed to modify the process in an attempt to eliminate these issues in order to improve the method for teenagers. | ['Obelem Akobo Wodike', 'Gavin Robert Sim', 'Matthew Horton'] | Empowering Teenagers to Perform a Heuristic Evaluation of a Game | 629,891 |
In this paper, we propose a motion planning method to escort a set of agents from one place to a goal in an environment with obstacles. The agents are distributed in a finite area, with a time-varying perimeter, in which we put multiple robots to patrol around it with a desired velocity. Our proposal is composed of two parts. The first one generates a plan to move and deform the perimeter smoothly, and as a result, we obtain a twice differentiable boundary function. The second part uses the boundary function to compute a trajectory for each robot, we obtain each resultant trajectory by first solving a differential equation. After receiving the boundary function, the robots do not need to communicate among themselves until they finish their trajectories. We validate our proposal with simulations and experiments with actual robots. | ['David Saldana', 'Reza Javanmard Alitappeh', 'Luciano C. A. Pimenta', 'Renato Assunção', 'Mario Fernando Montenegro Campos'] | Dynamic perimeter surveillance with a team of robots | 817,880 |
We consider the numerical solution of projected algebraic Riccati equations using Newton's method. Such equations arise, for instance, in model reduction of descriptor systems based on positive real and bounded real balanced truncation. We also discuss the computation of low-rank Cholesky factors of the solutions of projected Riccati equations. Numerical examples are given that demonstrate the properties of the proposed algorithms. | ['Peter Benner', 'Tatjana Stykel'] | Numerical Solution of Projected Algebraic Riccati Equations | 335,320 |
An analytical framework to assess the performance of a given diversity system in wireless environments characterized by their power dispersion profiles is established. In particular, we consider maximal-ratio diversity systems and prove the monotonicity of their matched-filter bound performance. Our results are general in that no specific fading distributions are required to compare and/or bound the performance of a given diversity system in different wireless environments. Hence, these results are valid for arbitrary fading distributions. | ['Moe Z. Win'] | On the monotonicity of matched-filter bounds for diversity combining receivers | 389,332
Many computing systems today are heterogeneous in that they consist of a mix of different types of processing units (e.g., CPUs, GPUs). Each of these processing units has different execution capabilities and energy consumption characteristics. Job mapping and scheduling play a crucial role in such systems as they strongly affect the overall system performance, energy consumption, peak power and peak temperature. Allocating resources (e.g., core scaling, threads allocation) is another challenge since different sets of resources exhibit different behavior in terms of performance and energy consumption. Many studies have been conducted on job scheduling with an eye on performance improvement. However, few of them take into account both performance and energy. We thus propose our novel Performance, Energy and Thermal aware Resource Allocator and Scheduler (PETRAS) which combines job mapping, core scaling, and threads allocation into one scheduler. Since job mapping and scheduling are known to be NP-hard problems, we apply an evolutionary algorithm called a Genetic Algorithm (GA) to find an efficient job schedule in terms of execution time and energy consumption, under peak power and peak temperature constraints. Experiments conducted on an actual system equipped with a multicore CPU and a GPU show that PETRAS finds efficient schedules in terms of execution time and energy consumption. Compared to performance-based GA and other schedulers, on average, the PETRAS scheduler can achieve up to a 4.7x speedup and an energy saving of up to 195%. | ['Shouq Alsubaihi', 'Jean-Luc Gaudiot'] | PETRAS: Performance, Energy and Thermal Aware Resource Allocation and Scheduling for Heterogeneous Systems | 997,928
Background: Gene expression signatures in the mammalian brain hold the key to understanding neural development and neurological disease. Researchers have previously used voxelation in combination with microarrays for acquisition of genome-wide atlases of expression patterns in the mouse brain. On the other hand, some work has been performed on studying gene functions, without taking into account the location information of a gene's expression in a mouse brain. In this paper, we present an approach for identifying the relation between gene expression maps obtained by voxelation and gene functions. | ['Li An', 'Hongbo Xie', 'Mark H. Chin', 'Zoran Obradovic', 'Desmond J. Smith', 'Vasileios Megalooikonomou'] | Analysis of multiplex gene expression maps obtained by voxelation. | 201,245
The energy efficiency of cloud computing has recently attracted a great deal of attention. As a result of raised expectations, cloud providers such as Amazon and Microsoft have started to deploy a new IaaS service, a MapReduce-style virtual cluster, to process data-intensive workloads. Considering that the IaaS provider supports multiple pricing options, we study batch-oriented consolidation and online placement for reserved virtual machines (VMs) and on-demand VMs, respectively. For batch cases, we propose a DVFS-based heuristic TRP-FS to consolidate virtual clusters on physical servers to save energy while guaranteeing job SLAs. We prove the most efficient frequency that minimizes the energy consumption, and the upper bound of energy saving through DVFS techniques. More interestingly, this frequency only depends on the type of processor. FS can also be used in combination with other consolidation algorithms. For online cases, a time-balancing heuristic OTB is designed for on-demand placement, which can reduce the mode switching by means of balancing server duration and utilization. The experimental results both in simulation and using the Hadoop testbed show that our approach achieves greater energy savings than existing algorithms. | ['Fei Teng', 'L. G. Yu', 'Tianrui Li', 'Danting Deng', 'Frédéric Magoulès'] | Energy efficiency of VM consolidation in IaaS clouds | 833,272
To improve business process performances, this study develops a dynamic task assignment approach for minimizing the cycle time of business processes. In particular, considering the quantity of each resource as a new parameter, a formal business process model and a novel approach to estimating the mean cycle time of activities are presented for task assignment based on individual worklists, queuing theory and stochastic theory. Then the mathematical model of task assignment and its solution for minimizing the cycle time are proposed, and the dynamic task re-assignment policy is developed to further reduce the cycle time. The results of simulation experiments and practical applications show our approach has better validity and practical viability than other approaches. | ['Yi Xie', 'Chen-Fu Chien', 'Renzhong Tang'] | A dynamic task assignment approach based on individual worklists for minimizing the cycle time of business processes | 587,031 |
This paper presents a novel and notable swarm approach to evolve an optimal set of weights and architecture of a neural network for classification in data mining. In a distributed environment the proposed approach generates randomly multiple architectures competing with each other while fine-tuning their architectural loopholes to generate an optimum model with maximum classification accuracy. Aiming at better generalization ability, we analyze the use of particle swarm optimization (PSO) to evolve an optimal architecture with high classification accuracy. Experiments performed on benchmark datasets show that the performance of the proposed approach has good classification accuracy and generalization ability. Further, a comparative performance of the proposed model with other competing models is given to show its effectiveness in terms of classification accuracy. | ['Satchidananda Dehuri', 'Bijan Bihari Mishra', 'Sung-Bae Cho'] | A notable swarm approach to evolve neural network for classification in data mining | 76,072 |
Targeting Advertising Scenarios for e-Shops Surfers | ['Dalia Kriksciuniene', 'Virgilijus Sakalauskas'] | Targeting Advertising Scenarios for e-Shops Surfers | 998,293 |
Deep Belief Networks (DBNs) are a very competitive alternative to Gaussian mixture models for relating states of a hidden Markov model to frames of coefficients derived from the acoustic input. They are competitive for three reasons: DBNs can be fine-tuned as neural networks; DBNs have many non-linear hidden layers; and DBNs are generatively pre-trained. This paper illustrates how each of these three aspects contributes to the DBN's good recognition performance using both phone recognition performance on the TIMIT corpus and a dimensionally reduced visualization of the relationships between the feature vectors learned by the DBNs that preserves the similarity structure of the feature vectors at multiple scales. The same two methods are also used to investigate the most suitable type of input representation for a DBN. | ['Abdel-rahman Mohamed', 'Geoffrey E. Hinton', 'Gerald Penn'] | Understanding how Deep Belief Networks perform acoustic modelling | 543,252 |
Sparse impulse responses are encountered in many acoustic and wireless channels. Recently, a class of exponentiated gradient (EG) algorithms has been proposed. One of the algorithms belonging to this class, the so-called EG± algorithm, converges and tracks much better than the classical stochastic gradient, or LMS, algorithm for sparse impulse responses. We apply this technique to blind identification of a sparse SIMO system and develop the multichannel EG± algorithm. A simple experiment demonstrates its advantage in convergence compared to the MCLMS algorithm. | ['Jacob Benesty', 'Yiteng Huang', 'Jingdong Chen'] | An exponentiated gradient adaptive algorithm for blind identification of sparse SIMO systems | 157,709
The paper presents a series of experiments on speech utterance classification performed on the ATIS corpus. We compare the performance of n-gram classifiers with that of Naive Bayes and maximum entropy classifiers. The n-gram classifiers have the advantage that one can use a single pass system (concurrent speech recognition and classification) whereas for Naive Bayes or maximum entropy classification we use a two-stage system: speech recognition followed by classification. Substantial relative improvements (up to 55%) in classification accuracy can be obtained using discriminative training methods that belong to the class of conditional maximum likelihood techniques. | ['Ciprian Chelba', 'Milind Mahajan', 'Alex Acero'] | Speech utterance classification | 531,469
Many applications demand high-precision navigation in urban environments. Two frequency real-time kinematic (RTK) Global Positioning System (GPS) receivers are too expensive for low-cost or consumer-grade projects. As single-frequency GPS receivers are getting less expensive and more capable, more people are utilizing single-frequency RTK GPS techniques to achieve high accuracy in such applications. However, compared with dual-frequency receivers, it is much more difficult to resolve the integer ambiguity vector using single-frequency phase measurements and therefore more difficult to achieve reliable high-precision navigation. This paper presents a real-time sliding-window estimator that tightly integrates differential GPS and an inertial measurement unit to achieve reliable high-precision navigation performance in GPS-challenged urban environments using low-cost single-frequency GPS receivers. Moreover, this paper proposes a novel method to utilize the phase measurements, without resolving the integer ambiguity vector. Experimental results demonstrate real-time position estimation performance at the decimeter level. Furthermore, the novel use of phase measurements improves the robustness of the estimator to pseudorange multipath error. | ['Sheng Zhao', 'Yiming Chen', 'Jay A. Farrell'] | High-Precision Vehicle Navigation in Urban Environments Using an MEM's IMU and Single-Frequency GPS Receiver | 706,419 |
Purpose – The purpose of this paper is to describe how an “experience framework” for an evidence-based information literacy educational intervention can be formulated. Design/methodology/approach – The experience framework is developed by applying the qualitative methodology phenomenography to the analysis of the variation in the experience of a phenomenon by a target group, making specific use of one of its data analysis methods, that pioneered by Gerlese Akerlind. A phenomenographic study’s descriptions of the limited but related experiences of the phenomenon, and the detail of context and complexity in experience achieved through the Akerlind data analysis technique, are essential to a framework’s structure and educationally valuable richness of detail. Findings – The “experience framework”, an example of which is set out in this paper, is formed from a detailed range of contexts, forms and levels of complexity of experience of a phenomenon, such as information literacy, in a group or profession. Group... | ['Marc Forster'] | Developing an “experience framework” for an evidence-based information literacy educational intervention | 710,294 |
Advances in computing power and the development of high-speed networks have enabled research on interactive and real-time multimedia computing. Nevertheless, the video server in MMDBMS suffers from the limitation of current magnetic disk technology for supporting large amounts of interactive requests simultaneously. We study the problem of data placement in MMDBMS consisting of multiple disks to provide natural retrieval of different portions of video concurrently. Our approach is to disperse video segments onto distributed disks in a restricted round-robin manner. The prime round-robin placement scheme provides uniform load-balance of disks at any rate. In addition, we have shown that the CM server with efficient placement provides appropriate retrieval operations during the execution of other operations. | ['Taeck-Geun Kwon', 'Sukho Lee'] | Data placement for continuous media in multimedia DBMS | 339,645 |
An inclusion rule for vantage point tree range query processing | ['Guohang Zeng', 'Qiaozhi Li', 'Huiming Jia', 'Xingliang Li', 'Yadi Cai', 'Rui Mao'] | An inclusion rule for vantage point tree range query processing | 731,751 |
In this paper we present a rover navigation dataset collected at a Mars/Moon analogue site on Devon Island, in the Canadian High Arctic. The dataset is split into two parts. The first part contains rover traverse data: stereo imagery, Sun vectors, inclinometer data, and ground-truth position information from a differential global positioning system (DGPS) collected over a 10-km traverse. The second part contains long-range localization data: 3D laser range scans, image panoramas, digital elevation models, and GPS data useful for global position estimation. All images are available in common formats and other data is presented in human-readable text files. To facilitate use of the data, Matlab parsing scripts are included. | ['Paul Timothy Furgale', 'Patrick J. F. Carle', 'John Enright', 'Timothy D. Barfoot'] | The Devon Island rover navigation dataset | 203,622 |
We model a problem about networks built from wireless devices using identifying and locating–dominating codes in unit disk graphs. It is known that minimizing the size of an identifying code is NP-complete even for bipartite graphs. First, we improve this result by showing that the problem remains NP-complete for bipartite planar unit disk graphs. Then, we address the question of the existence of an identifying code for random unit disk graphs. We derive the probability that there exists an identifying code as a function of the radius of the disks, and we find that for all interesting ranges of r this probability is bounded away from one. The results obtained are in sharp contrast to those concerning random graphs in the Erdős–Rényi model. Another well-studied class of codes is that of locating–dominating codes, which are less demanding than identifying codes. A locating–dominating code always exists, but minimizing its size is still NP-complete in general. We extend this result to our setting by showing that this question remains NP-complete for arbitrary planar unit disk graphs. Finally, we study the minimum size of such a code in random unit disk graphs, and we prove that with probability tending to one, it is of size (n/r)^{2/3+o(1)} if r ≤ /2−ϵ is chosen such that nr² → ∞, and of size n^{1+o(1)} if nr² ≪ ln n. | ['Tobias Müller', 'Jean-Sébastien Sereni'] | Identifying and locating–dominating codes in (random) geometric networks | 45,841 |
Real Time Strategy (RTS) games pose a series of challenges to players and AI Agents due to its dynamical, distributed and multi-objective fashion. In this paper, we propose and develop an Artificial Intelligence (AI) system that helps the player during the game, giving him tactical and strategical tips about the best actions to be taken according to the current game state with the objective of improving the player's performance. We describe the main features of the system, its implementation and perform experiments using a real game to evaluate its effectiveness. | ['Renato Luiz de Freitas Cunha', 'Luiz Chaimowicz'] | An Artificial Intelligence System to Help the Player of Real-Time Strategy Games | 915,973 |
The Missouri lottery, a profit-driven nonprofit organization, generates annual revenues of over $800 million by selling lottery tickets; 27.5 percent of the revenue goes to Missouri's public education programs. The lottery sales representatives (LSRs) play a central role in increasing sales by providing excellent customer service to ticket retailers throughout the state. Hence, LSRs must have equitable, balanced work schedules and efficient routes and navigation sequences. Our objective was to provide scheduling and routing policies that minimize LSRs' total travel distance while balancing their workloads and meeting visitation constraints. We modeled the problem as a periodic traveling-salesman problem and developed improvement algorithms specifically to solve this problem. The newly implemented schedules and routes decrease the LSRs' travel distance by 15 percent, improve visitation feasibility by 46 percent, increase the balance of routes by 63 percent, decrease overtime days by 32 percent, and indirectly increase the sales of lottery tickets by improving customer service. | ['Wooseung Jang', 'Huay H. Lim', 'Thomas J. Crowe', 'Gail Raskin', 'Thomas Perkins'] | The Missouri Lottery Optimizes Its Scheduling and Routing to Improve Efficiency and Balance | 254,287 |
We describe our vision, goals and plans for HARNESS, a distributed, reconfigurable and heterogeneous computing environment that supports dynamically adaptable parallel applications. HARNESS builds on the core concept of the personal virtual machine as an abstraction for distributed parallel programming, but fundamentally extends this idea, greatly enhancing dynamic capabilities. HARNESS is being designed to embrace dynamics at every level through a pluggable model that allows multiple distributed virtual machines (DVMs) to merge, split and interact with each other. It provides mechanisms for new and legacy applications to collaborate with each other using the HARNESS infrastructure, and defines and implements new plug-in interfaces and modules so that applications can dynamically customize their virtual environment. HARNESS fits well within the larger picture of computational grids as a dynamic mechanism to hide the heterogeneity and complexity of the nationally distributed infrastructure. HARNESS DVMs allow programmers and users to construct personal subsets of an existing computational grid and treat them as unified network computers, providing a familiar and comfortable environment that provides easy-to-understand scoping. | ['Jack Dongarra', 'Graham E. Fagg', 'Al Geist', 'James Arthur Kohl', 'Philip M. Papadopoulos', 'Stephen L. Scott', 'Vaidy S. Sunderam', 'M Magliardi'] | HARNESS: Heterogeneous Adaptable Reconfigurable NEtworked SystemS | 181,869 |
This paper proposes new building blocks for the lattice structure of oversampled linear-phase perfect reconstruction filter banks (OLPPRFBs). The structure is an extended version of higher-order feasible building blocks for critically-sampled LPPRFBs. It uses fewer number of building blocks and design parameters than those of traditional OLPPRFBs, whereas the frequency characteristic of the new OLPPRFB is comparable to that of traditional one. | ['Yuichi Tanaka', 'Masaaki Ikehara', 'Truong Q. Nguyen'] | Oversampled linear-phase perfect reconstruction filter banks with higher-order feasible building blocks: Structure and parameterization | 387,173 |
Tackling the decision-making problem faced by a prosumer (i.e., a producer that is simultaneously a consumer) when selling and buying energy in the emerging smart electricity grid, is of utmost importance for the economic profitability of such a business entity. In this paper, we model, for the first time, this problem as a factored Markov Decision Process. By so doing, we are able to represent the problem compactly, and provide an exact optimal solution via dynamic programming - notwithstanding its large size. Our model successfully captures the main aspects of the business decisions of a prosumer corresponding to a community microgrid of any size. Moreover, it includes appropriate sub-models for prosumer production and consumption prediction. Experimental simulations verify the effectiveness of our approach; and show that our exact value iteration solution matches that of a state-of-the-art method for stochastic planning in very large environments, while outperforming it in terms of computation time. | ['Angelos Angelidakis', 'Georgios Chalkiadakis'] | Factored MDPS for Optimal Prosumer Decision-Making | 578,145 |
Based on the shift-splitting technique, a class of generalized shift-splitting preconditioners are proposed for both nonsingular and singular generalized saddle point problems. The generalized shift-splitting preconditioner is induced by a generalized shift-splitting of the generalized saddle point matrix, resulting in a generalized shift-splitting fixed-point iteration. Theoretical analyses show that the generalized shift-splitting iteration method is convergent and semi-convergent unconditionally for solving the nonsingular and the singular generalized saddle point problems, respectively. Numerical experiments of a model Navier-Stokes problem are implemented to demonstrate the feasibility and effectiveness of the proposed preconditioners. | ['Qin-Qin Shen', 'Quan Shi'] | Generalized shift-splitting preconditioners for nonsingular and singular generalized saddle point problems | 811,861 |
This paper describes the process of creating a design pattern management interface for a collection of mobile design patterns. The need to communicate how patterns are interrelated and work together to create solutions motivated the creation of this interface. Currently, most design pattern collections are presented in alphabetical lists. The Oracle Mobile User Experience team approach is to communicate relationships visually by highlighting and connecting related patterns. Before the team designed the interface, we first analyzed common relationships between patterns and created a pattern language map. Next, we organized the patterns into conceptual design categories. Last, we designed a pattern management interface that enables users to browse patterns and visualize their relationships. | ['Brent-Kaan William White'] | Visualizing mobile design pattern relationships | 163,505 |
Electroencephalography (EEG) and magnetoencephalography (MEG) measurements can be used to monitor neural activity, that is generally characterized using current or magnetic dipole source models with time-varying amplitude, position, and moment parameters. The EEG/MEG measurements, however, often contain artifacts that do not originate from the brain. These artifacts can include patient movement, normal heart electrical activity, muscle and eye movement, or equipment and environmental clutter. In this paper, we propose a novel neural activity estimation approach that integrates particle filtering with the probabilistic data association filter in order to validate neural measurements and suppress artifacts before estimating neural activity. Simulations using synthetic data with this approach demonstrate high performance in suppressing artifacts and tracking neural activity; results for real data are also presented. | ['Alexander Maurer', 'Lifeng Miao', 'Jun Jason Zhang', 'Narayan Kovvali', 'Antonia Papandreou-Suppappola', 'Chaitali Chakrabarti'] | EEG/MEG artifact suppression for improved neural activity estimation | 922,006 |
Query containment and query answering are two important computational tasks in databases. While query answering amounts to computing the result of a query over a database, query containment is the problem of checking whether, for every database, the result of one query is a subset of the result of another query. In this article, we deal with unions of conjunctive queries, and we address query containment and query answering under description logic constraints. Every such constraint is essentially an inclusion dependency between concepts and relations, and their expressive power is due to the possibility of using complex expressions in the specification of the dependencies, for example, intersection and difference of relations, special forms of quantification, regular expressions over binary relations. These types of constraints capture a great variety of data models, including the relational, the entity-relationship, and the object-oriented model, all extended with various forms of constraints. They also capture the basic features of the ontology languages used in the context of the Semantic Web. We present the following results on both query containment and query answering. We provide a method for query containment under description logic constraints, thus showing that the problem is decidable, and analyze its computational complexity. We prove that query containment is undecidable in the case where we allow inequalities in the right-hand-side query, even for very simple constraints and queries. We show that query answering under description logic constraints can be reduced to query containment, and illustrate how such a reduction provides upper-bound results with respect to both combined and data complexity. | ['Diego Calvanese', 'Giuseppe De Giacomo', 'Maurizio Lenzerini'] | Conjunctive query containment and answering under description logic constraints | 237,520 |
Energy management in sensor networks is crucial to prolong the network lifetime. Though existing sleep scheduling algorithms save energy, they lead to a large increase in end-to-end latency. We propose a new sleep schedule (Q-MAC) for query-based sensor networks that provides minimum end-to-end latency with energy-efficient data transmission. Whenever there is no query, the radios of the nodes sleep more using a static schedule. Whenever a query is initiated, the sleep schedule is changed dynamically. Based on the destination's location and packet transmission time, we predict the data arrival time and retain the radio of a particular node, which has forwarded the query packet, in the active state until the data packets are forwarded. Since our dynamic schedule alters the active period of the intermediate nodes in advance by predicting the packet arrival time, data is transmitted to the sink with low end-to-end latency. The objectives of our protocol are to (1) minimize the end-to-end latency by alerting the intermediate nodes in advance using the dynamic schedule and (2) reduce energy consumption by activating the neighbor nodes only when packets (query and data) are transmitted. Simulation results show that Q-MAC performs better than S-MAC by reducing the latency up to 80% with minimum energy consumption. | ['N.A. Vasanthi', 'Suganya Annadurai'] | Energy Efficient Sleep Schedule for Achieving Minimum Latency in Query based Sensor Networks | 236,391
We introduce a new robust cache-based timing attack on AES. We present experiments and concrete evidence that our attack can be used to obtain secret keys of remote cryptosystems if the server under attack runs on a multitasking or simultaneous multithreading system with a large enough workload. This is an important difference to recent cache-based timing attacks as these attacks either did not provide any supporting experimental results indicating if they can be applied remotely, or they are not realistically remote attacks. | ['Onur Aciiçmez', 'Werner Schindler', 'Çetin Kaya Koç'] | Cache based remote timing attack on the AES | 141,500 |
Medical Concept Resolution. | ['Nitish Aggarwal', 'Ken Barker', 'Christopher A. Welty'] | Medical Concept Resolution. | 732,531 |
IP telephony over mobile ad hoc networks is a topic of emerging interest in the research arena as one of the paths toward the fixed-mobile convergence in telecommunications networks. To investigate the performance characteristics of this service, we propose a complete system architecture, which includes a MAC protocol, a routing protocol, and the treatment of voice packets. The telephone system is analyzed in the case of point-to-point calls inside the ad hoc network, and the end-to-end performance is assessed in terms of the percentage of blocked and dropped calls, packet loss and packet delay. The analysis takes into account network scalability by investigating how the size of the multihop ad hoc network impacts the quality of service. Moreover, the synthetic mean opinion score of the telephone service is evaluated according to the ITU-T E-model | ['Paolo Giacomazzi', 'Luigi Musumeci', 'Giuseppe Caizzone', 'Giacomo Verticale', 'G. Liggieri', 'Agostino Proietti', 'Stefano Sabatini'] | Quality of service for packet telephony over mobile ad hoc networks | 113,972
In this paper we explore different techniques that allow the user to direct interactive evolutionary search. Broadening interaction beyond simple evaluation increases the amount of feedback and bias a user can apply to the search. Increased feedback will have the effect of directing the algorithm to more fruitful areas of the search space. This paper examines whether additional feedback from the user can be a benefit to the problem of evolutionary design. We find that the interface between the user and the search space plays a vital role in this process. | ['Jonathan Byrne', 'Erik Hemberg', "Michael O'Neill"] | Interactive operators for evolutionary architectural design | 490,284 |
In this paper we consider the State-Dependent Wiretap Channel (SD-WC). As the main idea, we model the SD-WC as a Cognitive Interference Channel (CIC), in which the primary receiver acts as an eavesdropper for the cognitive transmitter's message. From this point of view, the Channel State Information (CSI) in the SD-WC plays the role of the primary user's message in the CIC, which can be decoded at the eavesdropper. This idea enables us to use the main achievability approaches of the CIC, i.e., Gel'fand-Pinsker Coding (GPC) and Superposition Coding (SPC), to find new achievable equivocation rates for the SD-WC. We show that these approaches meet the capacity under some constraints on the rate of the channel state. Similar to the dirty paper channel, extending the results to the Gaussian case shows that the GPC leads to the capacity of the Gaussian SD-WC, which is equal to the capacity of the wiretap channel without channel state. Hence, we achieve the capacity of the Gaussian SD-WC using the dirty paper technique. Moreover, our proposed approaches provide the capacity of the Binary SD-WC. It is shown that the capacity of the Binary SD-WC is equal to the capacity of the Binary wiretap channel without channel state. | ['Hamid G. Bafghi', 'Babak Seyfe', 'Mahtab Mirmohseni', 'Mohammad Reza Aref'] | Capacity of the State-Dependent Wiretap Channel: Secure Writing on Dirty Paper | 802,117
For a given multi-hop route in an IEEE 802.16 mesh network, we are interested in finding its end-to-end capacity so that admission control can be performed. The end-to-end capacity is difficult to determine due to the interference between communicating nodes caused by the broadcast nature of radio propagation. In this paper, we first propose a method to determine link capacity between two nodes, after which a zone-based method is used to obtain the end-to-end capacity of a route. We demonstrate the effectiveness of the link capacity and end-to-end capacity computing methods through simulations. | ['Yu Ge', 'Chen-Khong Tham', 'Peng Yong Kong', 'Yew-Hock Ang'] | Capacity Estimation for IEEE 802.16 Wireless Multi-Hop Mesh Networks | 14,110 |
We present an approach for learning simple algorithms such as copying, multi-digit addition and single digit multiplication directly from examples. Our framework consists of a set of interfaces, accessed by a controller. Typical interfaces are 1-D tapes or 2-D grids that hold the input and output data. For the controller, we explore a range of neural network-based models which vary in their ability to abstract the underlying algorithm from training instances and generalize to test examples with many thousands of digits. The controller is trained using Q-learning with several enhancements and we show that the bottleneck is in the capabilities of the controller rather than in the search incurred by Q-learning. | ['Wojciech Zaremba', 'Tomas Mikolov', 'Armand Joulin', 'Rob Fergus'] | Learning Simple Algorithms from Examples | 549,308
In this paper we propose an adaptive filter based on the assumption that the error is t-distributed with X degrees of freedom. The optimal system is updated using an LMS-like algorithm. When the input contains impulsive signals, the convergence of the algorithm with small X is faster than when a large X is used. Simulation results also show that the convergence of the proposed method is faster than that of other LMS variants proposed earlier. | ['Junibakti Sanubari', 'Keiichi Tokuda'] | Fast convergence transversal adaptive filtering algorithm for impulsive environment based on T distribution assumption | 439,174
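For reference, the LMS recursion that the abstract's robust filter builds on can be sketched in a few lines. This is a plain least-squares LMS system-identification example with a hypothetical 3-tap FIR system, not the paper's t-distribution-weighted variant; it only illustrates the baseline update that the proposed method modifies.

```python
import numpy as np

def lms_identify(x, d, n_taps, mu):
    """Identify an unknown FIR system from input x and desired output d
    using the standard LMS stochastic-gradient update."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # [x[n], x[n-1], ..., x[n-taps+1]]
        e = d[n] - w @ u                   # a-priori output error
        w = w + mu * e * u                 # LMS weight update
    return w

# Hypothetical unknown system and noise-free data for illustration
rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)]        # desired signal d[n] = sum_k h[k] x[n-k]
w_hat = lms_identify(x, d, n_taps=3, mu=0.01)
```

With a unit-variance input and step size 0.01, the estimated weights converge to the true taps within a few hundred iterations; the robust variant described in the abstract changes how the error `e` enters the update, down-weighting impulsive samples.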
In this paper we describe a new approach to computer-based health promotion, based on a conversational model. We base our model on a collection of human-human email dialogues concerning healthy nutrition. Our system uses a database of tips and small pieces of advice, organised so that support for the advice, and arguments against the advice may be explored. The technical framework and initial evaluation results are described. | ['Alison Cawsey', 'Floriana Grasso', 'R. Jones'] | A Conversational Model for Health Promotion on the World Wide Web | 438,170 |
Grasping dynamic stability is an important quality index for robotic grasping. Previous studies on the grasping stability usually consider the problem under the quasi-static assumption. This paper systematically investigates the grasping dynamic behavior from both theoretical and simulation aspects for the purpose of practical measurements. The dynamic stability of a grasping system is related to the contact type between the grasped object and the fingers, the grasping configuration, the control law of the fingers, and the passive compliance of the fingers. Starting from the dynamic equations of the grasping system, Liapunov stability theory is used to study the grasping dynamic stability problem. Simulation was conducted in support of the theoretical hypothesis. A gripper system was developed incorporating real time three-dimensional grasp force sensing for each of the fingers with some initial experimental results given. | ['Y.F. Ki', 'S.K. Tso', 'Qinggang Meng'] | Grasping force measurement for dynamic grasp stability assessment | 207,464 |
High-resolution optical satellite images of the reconstructed agricultural fields in Sendai Plain damaged by tsunami caused by the 2011 Great East Japan Earthquake were analyzed to observe variability in crop growth. The main crop of the target area is paddy rice obtained through transplantation, and dry and wet direct seeding. Soybean is the second main crop, with some other vegetables being planted. Normalized difference vegetation index (NDVI) was computed from the images captured in early July in 2014, and early July, late July, and middle August in 2015. Growth variability in the same type of crops was recognized. The differences in the growth statuses resulting from the differences in crop types and cultivation methods appeared on the NDVI images. In a comparison of the three images acquired in 2015, the early July image showed the most obvious variability in crop growth. | ['Chinatsu Yonezawa', 'Manabu Watanabe'] | Monitoring of variability in crop growth on reconstructed agricultural Land after the 2011 Great East Japan Earthquake | 931,176
Due to the wide existence of mixed pixels, the derivation of constituent components (endmembers) and their fractional proportions (abundances) at the subpixel scale has been given a lot of attention. The entire process is often referred to as mixed-pixel decomposition or spectral unmixing. Although various algorithms have been proposed to solve this problem, two potential issues still need to be further investigated. First, assuming the endmembers are known, the abundance estimation is commonly performed by employing a least-squares error criterion, which, however, makes the estimation sensitive to noise and outliers. Second, the mathematical intractability of the abundance non-negative constraint results in computationally expensive numerical approaches. In this paper, we propose an unsupervised decomposition method based on the classic maximum entropy principle, termed the gradient descent maximum entropy (GDME), aiming at robust and effective estimates. We address the importance of the maximum entropy principle for mixed-pixel decomposition from a geometric point of view and demonstrate that when the given data present strong noise or when the endmember signatures are close to each other, the proposed method has the potential of providing more accurate estimates than the popular least-squares methods (e.g., fully constrained least squares). We apply the proposed GDME to the subject of unmixing multispectral and hyperspectral data. The experimental results obtained from both simulated and real images show the effectiveness of the proposed method | ['Lidan Miao', 'Hairong Qi', 'Harold H. Szu'] | A Maximum Entropy Approach to Unsupervised Mixed-Pixel Decomposition | 427,734 |
The mobility-enabling protocol Mobile IP supports location registration but not paging. However, current cellular networks use registration as well as paging procedures to minimize signaling cost. Accordingly, an extension to Mobile IP using distributed individual paging, the so-called DIP-MIP, is proposed. In DIP-MIP, each mobile host derives its own paging area size by optimizing a signaling cost function based on its individual mobility pattern. The cost function itself may use either of two mobility models - fluid flow and random walk - and the performance of DIP-MIP is analyzed for both. The impact of various parameters on the DIP-MIP signaling cost is studied as well. The performance of DIP-MIP is shown to be superior to that of Mobile IP (MIP) in reducing signaling load, managing mobility and supporting a large number of mobile users in IP-based cellular networks. | ['Chansophea Chuon', 'Sumanta Guha'] | DIP-MIP: Distributed individual paging extension for mobile IP in IP-based cellular networks | 59,738 |
Arc flash personal protective equipment is generally selected based on one of two methods: an incident energy analysis method or a hazard/risk category method. Neither method adequately addresses the deployment of arc flash personal protective equipment using risk management principles and processes. The objective of this paper is to identify and apply risk management principles and methodology found in current standards to assist with the selection of arc flash personal protective equipment and to determine when deployment of the arc flash protective equipment is warranted. | ['Daniel Roberts'] | Arc flash personal protective equipment applying risk management principles - II | 332,771 |
Vertical Movement Control of Quad-thrust Aerial Robot - Design, Analysis and Experimental Validation. | ['Roman Czyba', 'Grzegorz Szafrański'] | Vertical Movement Control of Quad-thrust Aerial Robot - Design, Analysis and Experimental Validation. | 764,763 |
It is well known that the evolution of cooperative behaviour is dependant upon certain environmental conditions. One such condition that has been extensively studied is the use of a spatially structured population, whereby cooperation is favoured by a reduced number of interactions between cooperators and selfish cheaters. However, models that address the role of spatial structure typically use an individual-based approach, which can make analysis unnecessarily complicated. By contrast, non-spatial population genetics models usually consist entirely of a set of replicator equations, thereby simplifying analysis. Unfortunately, these models cannot traditionally be used to take account of spatial structure, since they assume that interaction between any pair of individuals in a population is equally likely. In this paper, we construct as model that is still based on replicator equations, but where spatial localisation with respect to the number of interactions between individuals is incorporated. Using this model, we are able to successfully reproduce the dynamics seen in more complex individual-based models. | ['Simon T. Powers', 'Richard A. Watson'] | Investigating the evolution of cooperative behaviour in a minimally spatial model | 245,570 |
Progressive Gaussian filtering using explicit likelihoods | ['Jannik Steinbring', 'Uwe D. Hanebeck'] | Progressive Gaussian filtering using explicit likelihoods | 603,905 |
Combining the achievements in major pollutant emission reduction during the 11th Five-Year Plan period with the reduction demands of the 12th Five-Year Plan, an index system for major pollutant emission reduction was established based on the three primary reduction measures of structure emission reduction, engineering emission reduction, and supervision emission reduction. The index system can help to assess the emission reduction effect for the major pollutants in China and provides a basis for further research on pollutant total amount control. China is still in the mid-to-late stage of industrialization during the 12th Five-Year period, and industrialization and urbanization are still accelerating, so the contradiction between resources and the environment will become more concentrated. Energy saving and emission reduction have become important instruments for building a resource-conserving, environment-friendly society, promoting economic restructuring, and changing the growth mode. How to reduce emissions of major pollutants through a scientific and rational method has become a significant issue in China. In recent years, scholars have carried out a series of related studies, focused on the socio-economic impact of energy conservation (1), the implementation of energy saving from legal and policy perspectives (2), and energy conservation in enterprises or industries (3). Some index systems have been constructed for assessing the emission reduction effect of a city or an area (4). However, there is a lack of independent and universal research that systematically analyzes the effect of China's major pollutant total amount emission reduction. Further, a scientific and reasonable evaluation index system that could be extensively applied to assess the results of emission reduction in different areas is in demand.
Therefore, this study attempts to build a universal index system for assessing the major pollutant emission reduction effect in China. | ['Zhuang Li', 'Yan Hu', 'Juan Li', 'Yu Li'] | Investigation on Index System for Major Pollutants Emission Reduction in Structure, Engineering and Supervision | 602,830
We present a neural network approach for the real-time optimization and control of interconnected nonlinear systems in the presence of more general constraints, i.e. equality and inequality constraints, and bound-constrained variables. For the interconnected system with bound-constrained variables, we transform it into an equivalent formulation without bound constraints. With the help of auxiliary variables, the inequality constrained problem is reformulated as a problem with only equality constraints. Moreover, an electrocircuit is proposed for implementing the Lagrange neurons in the inequality constrained systems. Simulation studies show that this proposed method is satisfactory for the real-time optimization and control of large-scale systems. | ['Zeng-Guang Hou', 'Min Tan', 'Madan M. Gupta', 'Peter N. Nikiforuk'] | Real-time optimization and computation for interconnected nonlinear systems using neural networks | 140,928 |
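The Lagrange-neuron idea in the abstract can be illustrated on a much smaller scale. The sketch below is not the paper's interconnected-system formulation; it simulates the classic Arrow-Hurwicz gradient dynamics (state neurons descend the Lagrangian, a multiplier neuron ascends it) by Euler integration on a hypothetical equality-constrained problem: minimize (x1-1)^2 + (x2-2)^2 subject to x1 + x2 = 1.

```python
import numpy as np

def lagrange_network(steps=20000, dt=0.01):
    """Euler-integrate Lagrange-neuron dynamics for
    min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 = 1 (toy instance)."""
    x = np.zeros(2)
    lam = 0.0
    for _ in range(steps):
        grad_f = 2 * (x - np.array([1.0, 2.0]))  # gradient of the objective
        g = x.sum() - 1.0                        # equality-constraint residual
        x = x - dt * (grad_f + lam * np.ones(2)) # state neurons: descend Lagrangian
        lam = lam + dt * g                       # multiplier neuron: ascend Lagrangian
    return x, lam

x_star, lam_star = lagrange_network()
```

The dynamics settle at the KKT point x = (0, 1) with multiplier 2, which matches the analytic solution of this toy problem; inequality constraints are handled in the paper by first reformulating them as equalities via auxiliary variables.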
Mobile users are unlikely to guard against information security risks that do not come to mind in typical situations. As more people conduct sensitive transactions through mobile devices, what risks do they perceive? To inform the design of mobile applications we present a user study of perceived risk for information technology workers accessing company data, consumers using mobile personal banking, and doctors accessing medical records. Shoulder surfing and network snooping were the most commonly cited classes of risk, and perceived risk was influenced by the surrounding environment and source of information. However, overall risk awareness was low. The possible risks of device theft and loss, hacking, malware and data stored on devices were not prominent concerns. The study also revealed differences in the way the groups think about network-related threats. Based on these results, we suggest research directions for effective protection of sensitive data in mobile environments. | ['Shari Trewin', 'Calvin Swart', 'Lawrence Koved', 'Kapil Singh'] | Perceptions of Risk in Mobile Transaction | 841,692 |
It was demonstrated in earlier work that, by approximating its range kernel using shiftable functions, the nonlinear bilateral filter can be computed using a series of fast convolutions. Previous approaches based on shiftable approximation have, however, been restricted to Gaussian range kernels. In this work, we propose a novel approximation that can be applied to any range kernel, provided it has a pointwise-convergent Fourier series. More specifically, we propose to approximate the Gaussian range kernel of the bilateral filter using a Fourier basis, where the coefficients of the basis are obtained by solving a series of least-squares problems. The coefficients can be efficiently computed using a recursive form of the QR decomposition. By controlling the cardinality of the Fourier basis, we can obtain a good tradeoff between the run-time and the filtering accuracy. In particular, we are able to guarantee subpixel accuracy for the overall filtering, which is not provided by the most existing methods for fast bilateral filtering. We present simulation results to demonstrate the speed and accuracy of the proposed algorithm. | ['Sanjay Ghosh', 'Kunal Narayan Chaudhury'] | On Fast Bilateral Filtering Using Fourier Kernels | 692,551 |
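The key step of the abstract's method, approximating the Gaussian range kernel in a Fourier (cosine) basis by least squares, can be checked numerically in one dimension. The sketch below uses a plain dense least-squares fit rather than the recursive QR decomposition the paper describes; the kernel width, dynamic range, and basis size are illustrative choices.

```python
import numpy as np

# Fit a truncated Fourier-cosine basis to a Gaussian range kernel on [-T, T].
# Once the kernel is a short sum of cosines ("shiftable"), the bilateral
# filter reduces to a few fast spatial convolutions.
sigma, T, K = 30.0, 255.0, 12          # kernel width, dynamic range, basis size
t = np.linspace(-T, T, 1001)
g = np.exp(-t**2 / (2 * sigma**2))     # target Gaussian range kernel

# Design matrix of cosines cos(k*pi*t/T), k = 0..K-1 (even symmetry suffices)
A = np.cos(np.outer(t, np.arange(K)) * np.pi / T)
c, *_ = np.linalg.lstsq(A, g, rcond=None)
approx = A @ c
max_err = np.abs(approx - g).max()
```

Even with only a dozen cosine terms the pointwise approximation error is small, which is the tradeoff the abstract refers to: a larger basis raises accuracy at the cost of more convolutions per pixel.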
Purpose: This paper aims to present a hybrid approach based on classification algorithms that is capable of identifying different types of phishing pages. In this approach, after eliminating features that do not play an important role in identifying phishing attacks, and after adding the technique of searching the page title in a search engine, the capability of identifying journal phishing and phishing pages embedded in legal sites was added to the approach presented in this paper. Design/methodology/approach: The hybrid approach of this paper for identifying phishing web sites consists of four basic sections. Identifying phishing web sites and journal phishing attacks is performed by selecting two classification algorithms separately. To identify phishing attacks embedded in legal web sites, the method of page title searching is used and the result is returned. To facilitate identifying phishing pages, the blacklist approach is used along with the proposed approach so that identification of phishing web sites can be performed more accurately; finally, by using a decision table, it is judged whether the intended web site is phishing or legal. Findings: In this paper, a hybrid approach based on classification algorithms to identify phishing web sites is presented that has the ability to identify a new type of phishing attack known as journal phishing. The presented approach considers the most used features, adds new features to identify these attacks, eliminates unused features in the identification process, does not have the problems of previous techniques, and can identify journal phishing too. Originality/value: The major advantage of this technique is considering all of the possible and effective features in identifying phishing attacks and eliminating unused features of previous techniques; also, this technique, in comparison with other similar techniques, has the ability to identify journal phishing attacks and phishing pages embedded in legal sites. | ['Mehdi Dadkhah', 'Shahaboddin Shamshirband', 'Ainuddin Wahid Abdul Wahab'] | A hybrid approach for phishing web site detection | 930,546
A simple technique is proposed to adjust the coefficients of transfer functions in switched-capacitor circuits. Transfer-function coefficients are defined by capacitor values, assuming full charge transfer among capacitors. The proposed tuning technique is based on adjusting the amount of charge transferred from one capacitor to the next. In this way, the net charge transferred in switched-capacitor circuits effectively modifies the transfer function of a particular block without modifying individual capacitor values. | ['Mustafa Keskin', 'Nurcan Keskin'] | A Tuning Technique for Switched-Capacitor Circuits | 63,879
On the Role of Compliance in Force Control | ['Andrea Calanca', 'Paolo Fiorini'] | On the Role of Compliance in Force Control | 732,905 |
New Results for Network Pollution Games | ['Eleftherios Anastasiadis', 'Xiaotie Deng', 'Piotr Krysta', 'Minming Li', 'Han Qiao', 'Jinshan Zhang'] | New Results for Network Pollution Games | 853,263 |
From 17.04.06 to 22.04.06, the Dagstuhl Seminar 06161 "Simulation and Verification of Dynamic Systems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available. | ['David M. Nicol', 'Corrado Priami', 'Hanne Riis Nielson', 'Adelinde M. Uhrmacher'] | 06161 Abstracts Collection -- Simulation and Verification of Dynamic Systems | 206,468
This paper describes multimodal systems for ad-hoc search constructed by IBM for the TRECVID 2003 benchmark of search systems for broadcast video. These systems all use a late fusion of independently developed speech-based and visual content-based retrieval systems and outperform our individual retrieval systems on both manual and interactive search tasks. For the manual task, our best system used a query-dependent linear weighting between speech-based and image-based retrieval systems. This system has mean average precision (MAP) performance 20% above our best unimodal system for manual search. For the interactive task, where the user has full knowledge of the query topic and the performance of the individual search systems, our best system used an interlacing approach. The user determines the (subjectively) optimal weights A and B for the speech-based and image-based systems, where the multimodal result set is aggregated by combining the top A documents from system A followed by top B documents of system B and then repeating this process until the desired result set size is achieved. This multimodal interactive search has MAP 40% above our best unimodal interactive search system. | ['Arnon Amir', 'Giridharan Iyengar', 'Ching-Yung Lin', 'Milind R. Naphade', 'Apostol Natsev', 'Chalapathy Neti', 'Harriet J. Nock', 'John R. Smith', 'Belle L. Tseng'] | Multimodal video search techniques: late fusion of speech-based retrieval and visual content-based retrieval | 202,776 |
IMCReo: interactive Markov chains for Stochastic Reo | ['Nuno Oliveira 0001', 'Alexandra Silva', 'Luís Soares Barbosa'] | IMCReo: interactive Markov chains for Stochastic Reo | 837,209 |
We introduce a method for the problem of learning the structure of a Bayesian network using the quantum adiabatic algorithm. We do so by introducing an efficient reformulation of a standard posterior-probability scoring function on graphs as a pseudo-Boolean function, which is equivalent to a system of 2-body Ising spins, as well as suitable penalty terms for enforcing the constraints necessary for the reformulation; our proposed method requires O(n²) qubits for n Bayesian network variables. Furthermore, we prove lower bounds on the necessary weighting of these penalty terms. The logical structure resulting from the mapping has the appealing property that it is instance-independent for a given number of Bayesian network variables, as well as being independent of the number of data cases. | ['B. O’Gorman', 'Ryan Babbush', 'Alejandro Perdomo-Ortiz', 'Alán Aspuru-Guzik', 'V. N. Smelyanskiy'] | Bayesian network structure learning using quantum annealing | 379,391
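To show the flavor of such a reformulation, the toy sketch below defines a 2-body pseudo-Boolean (QUBO) objective over binary variables and minimizes it by brute-force enumeration. The coefficient matrix is made up for illustration, not an actual Bayesian-network scoring function; on real instances the same objective would be handed to an annealer instead of enumerated.

```python
import itertools
import numpy as np

# Hypothetical upper-triangular QUBO coefficients: E(x) = x^T Q x over x in {0,1}^3
Q = np.array([[ 1.0, -2.0,  0.5],
              [ 0.0,  0.5, -1.5],
              [ 0.0,  0.0,  2.0]])

def qubo_energy(x):
    """Evaluate the 2-body pseudo-Boolean energy for a binary assignment."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Brute force over all 2^3 assignments (feasible only for toy sizes)
best = min(itertools.product([0, 1], repeat=3), key=qubo_energy)
```

For this instance the unique minimizer is x = (1, 1, 0) with energy -0.5; the quadratic penalty terms mentioned in the abstract play the role of making infeasible assignments (those violating the DAG constraints) energetically unfavorable in the same kind of objective.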
The present paper deals with the fusion of image analysis with an electromagnetic and support vector machine (SVM) optimization approach to estimate the depth of shallow buried metallic and dummy mine (i.e., without explosive) objects using microwave remote sensing data at X-band (i.e., 10 GHz). The objects were buried under dry and smooth sand. For this purpose, a monostatic scatterometer at X-band has been indigenously developed, consisting of a transmitter and receiver mounted on the stand of the sand pit; when operated, it moves over the pit along the X- and Y-axes. An algorithm has been proposed to first identify the suspected region, i.e., the region of interest (ROI) that contains the buried objects in the image, by proposing a quantity called the "detection figure" (D), and then to estimate the depth of the buried objects. The algorithm includes image processing, electromagnetic multilayer interaction, and an SVM approach. Convolution-based image processing techniques have been applied to avoid overlapping of the return signal. The support vector machine (SVM) approach has been analyzed for depth estimation, and an efficient method based on the electromagnetic multilayer interaction concept has been proposed to train the SVM. The depth estimated for the Al sheet gives better results than that for the dummy landmine, but the estimated depths for both objects are in good agreement with the actual depths. The present approach may be quite helpful for developing automatic satellite-data-based information systems to estimate the depth of various shallow buried objects with satellite or airborne radar data. | ['Dharmendra Singh'] | An efficient electromagnetic approach to train the SVM for depth estimation of shallow buried objects with microwave remote sensing data | 177,262
In this paper we examine the existence of correlation between movie content similarity and low level textual features from respective subtitles. In addition, we demonstrate the extraction of topical representation of movies based on subtitles mining. Using natural language processing and a topic modeling algorithm, namely Latent Dirichlet Allocation, applied on the movie subtitles, we extract the latent topic structure of a set of movies. In order to demonstrate the proposed content representation approach, we have built a dataset of 160 widely known movies, represented by their corresponding subtitles. After evaluating the resulting topics' quality and coherence, we move on to assess movie similarities, exploiting their distances in the topic-populated space. Finally, using those topic-space projections of the movies, we aspire to create a topic model browser for movies, allowing us to explore the different aspects of similarities between movies and discover latent knowledge regarding the movies through the association of low-level topic links and high level movie similarities. | ['Konstantinos Bougiatiotis', 'Theodoros Giannakopoulos'] | Content Representation and Similarity of Movies based on Topic Extraction from Subtitles | 729,967
Problems and relevant technology pertaining to the prospects for developing an expert-system-supported environment for modeling and problem solving are reviewed. Four aspects of research with model-based support systems are examined: (1) computer-assisted modeling; (2) knowledge-based modeling; (3) automated modeling environments; and (4) model-based query systems. An approach to equipping systems with the ability to support rapid domain-specific refinement by appropriate user-experts is explored. | ['Jay Weinroth'] | Model-based decision support and user modifiability | 124,530
COMPUTATIONAL SYMMETRY VIA PROTOTYPE DISTANCES FOR SYMMETRY GROUPS CLASSIFICATION | ['Manuel Agustí-Melchor', 'Angel Rodas-Jordá', 'José M. Valiente-González'] | COMPUTATIONAL SYMMETRY VIA PROTOTYPE DISTANCES FOR SYMMETRY GROUPS CLASSIFICATION | 793,931 |