abstract | authors | title | __index_level_0__
---|---|---|---|
Traditional approaches to Chinese Semantic Role Labeling (SRL) heavily rely on feature engineering. Even worse, the long-range dependencies in a sentence can hardly be modeled by these methods. In this paper, we introduce a bidirectional recurrent neural network (RNN) with long short-term memory (LSTM) to capture bidirectional and long-range dependencies in a sentence with minimal feature engineering. Experimental results on the Chinese Proposition Bank (CPB) show a significant improvement over the state-of-the-art methods. Moreover, our model makes it convenient to introduce heterogeneous resources, which further improves our experimental performance. | ['Zhen Wang', 'Tingsong Jiang', 'Baobao Chang', 'Zhifang Sui'] | Chinese Semantic Role Labeling with Bidirectional Recurrent Neural Networks | 615,604 |
The distribution of unicyclic components in a random graph is obtained analytically. The number of unicyclic components of a given size approaches a self-similar form in the vicinity of the gelation transition. At the gelation point, this distribution decays algebraically, U_k ≃ (4k)^(-1) for k ≫ 1. As a result, the total number of unicyclic components grows logarithmically with the system size. | ['E. Ben-Naim', 'P. L. Krapivsky'] | Unicyclic components in random graphs | 589,945 |
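The paper's result is analytical, but the object it counts is easy to pin down computationally: a connected component is unicyclic exactly when its number of edges equals its number of vertices. A minimal union-find sketch that counts such components in an explicit graph (the function name is ours, not from the paper):

```python
from collections import defaultdict

def count_unicyclic_components(n, edges):
    """Count connected components containing exactly one cycle.

    A connected component with v vertices is a tree iff it has v - 1
    edges, and unicyclic iff it has exactly v edges.
    """
    parent = list(range(n))

    def find(x):
        # path-halving union-find
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)

    vertices = defaultdict(int)
    edge_count = defaultdict(int)
    for v in range(n):
        vertices[find(v)] += 1
    for a, _ in edges:
        edge_count[find(a)] += 1

    return sum(1 for root in vertices if edge_count[root] == vertices[root])
```

For example, a triangle with one pendant vertex (4 vertices, 4 edges) is unicyclic, while a lone edge is a tree and a 4-vertex, 5-edge blob has two cycles.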
Abstract Objective This paper presents an empirical study of a formative mobile-based assessment approach that can be used to provide students with intelligent diagnostic feedback to test its educational effectiveness. Method An audience response system called SIDRA was integrated with a neural network-based data analysis to generate diagnostic feedback for guided learning. A total of 200 medical students enrolled in a General and Descriptive Anatomy of the Locomotor System course were taught using two different methods. Ninety students in the experimental group used intelligent SIDRA (i-SIDRA), whereas 110 students in the control group received the same training but without employing i-SIDRA. Results In the students' final exam grades, a statistically significant difference was found between those students that used i-SIDRA as opposed to a traditional teaching methodology (T(162)=2.597; p=0.010). The increase in the number of correct answers during the feedback guided learning process from the first submission to the last submission in four multiple choice question tests was also analyzed. There were average increases of 20.00% (Test1), 11.34% (Test2), 8.88% (Test3) and 13.43% (Test4) in the number of correct answers. In a questionnaire rated on a five-point Likert-type scale, the students expressed satisfaction with the content (M=4.2) and feedback (M=3.5) provided by i-SIDRA and the methodology (M=4.2) used to learn anatomy. Conclusions The use of audience response systems enriched with feedback such as i-SIDRA improves medical degree students' performance as regards anatomy of the locomotor system. The knowledge state diagrams representing students' behavior allow instructors to study their progress so as to identify what they still need to learn. | ['José Luis Fernández-Alemán', 'Laura López-González', 'Ofelia González-Sequeros', 'Chrisina Jayne', 'Juan José López-Jiménez', 'Ambrosio Toval'] | The evaluation of i-SIDRA – a tool for intelligent feedback – in a course on the anatomy of the locomotor system | 831,428 |
Network topology plays a critical role while designing and evaluating network protocols. Most existing topology generators are insufficient to reflect real-world network demands in a topology or to capture Internet topology evolution such as the "flattening" Internet. They focus on the graph properties of a topology and thus lack the ability to model engineering features of the network. Some state-of-the-art topology generators that consider engineering factors fail to capture trends in both intra-AS and inter-AS connections, which are equally important for evaluating future network protocols. We have developed a topology generator, GeoTopo, which is, to the best of our knowledge, the first scalable topology generator modeling engineering factors for both intra-AS and inter-AS topology generation. The engineering factors that GeoTopo considers include demographic and geographic features as well as business interests of ASes. We use GeoTopo to create and study three classes of topologies: the topology characterized mainly by graph properties (Status Quo topology), the topology driven by peering at Internet Exchange Points (IXP topology) and the topology characterized by country backbones (CB topology). The SQ topology follows the degree-based model and serves as a baseline for capturing topology features. The IXP and CB topologies model two major directions of the Internet "flattening". The three classes of topologies enable us to analyze the impact of engineering factors on topology generation such as AS peering policies, IXP deployment and AS geo-settings. GeoTopo's ability to generate projected future Internet topologies makes it a valuable tool for the design and evaluation of Future Internet Architectures that are currently under consideration in the research community. We use the evaluation of Global Name Resolution Service (GNRS), a key component shared by name-based network architectures, as an example application to demonstrate GeoTopo's capability to capture the mobility of network entities, the locality of the traffic, and the impact of the evolving network. | ['Yi Hu', 'Feixiong Zhang', 'K. K. Ramakrishnan', 'Dipankar Raychaudhuri'] | GeoTopo: A PoP-level Topology Generator for Evaluation of Future Internet Architectures | 696,918 |
High level synthesis (HLS) has been mainly concerned with datapath synthesis of a digital system. Consequently, controller effects are often ignored when performing HLS tasks. However, the controller may sometimes have significant contributions to the overall system area and delay. Thus, it is necessary to incorporate the controller effects during HLS. Since control synthesis tools such as MISII are time-consuming, it is not feasible to synthesize a controller netlist every time a high level design decision is made. As a result, it is necessary to estimate the controller contribution. As a first step towards a comprehensive prediction scheme, we present a simple yet effective controller estimation model which can be invoked during the register-transfer synthesis phase of HLS and attempts to reflect the incremental effects of iterative RT level transformations on the controller area and delay. Our model has been benchmarked and found to efficiently account for the controller area and delay. | ['Champaka Ramachandran', 'Fadi J. Kurdahi'] | Incorporating the controller effects during register transfer level synthesis | 920,978 |
In this paper, a Firefly Algorithm (FA) and Cuckoo Search (CS) based approach to optimizing the location and capacity of a UPFC to improve the dynamic stability of the power system is proposed. The novelty of the proposed method lies in its improved searching ability, randomness reduction and reduced complexity. In this regard, a generator fault affects the system dynamic stability constraints such as voltage, power loss, and real and reactive power. Here, the FA technique selects the maximum power loss line as the suitable location of the UPFC. The affected location parameters and dynamic stability constraints are restored to secure limits using the optimum capacity of the UPFC, which in turn has been optimized with reduced cost by using the CS algorithm. The attained capacity of the UPFC has been placed at the affected location and the power flow of the system analyzed. The proposed method is implemented in the MATLAB/Simulink platform and tested on the IEEE 30-bus and IEEE 14-bus standard benchmark systems. The proposed method's performance is evaluated by comparison with those of different techniques such as ABC-GSA, GSA-Bat, Bat-FA and CS algorithms. The comparison results invariably prove the effectiveness of the proposed method and confirm its potential to solve the related problems. | ['B. Vijay Kumar', 'N. V. Srikanth'] | A hybrid approach for optimal location and capacity of UPFC to improve the dynamic stability of the power system | 905,439 |
Reduction of Single Input Single Output (SISO) discrete systems into a Reduced Order Model (ROM), using a conventional and a bio-inspired evolutionary technique, is presented in this paper. The conventional technique combines the advantages of the Modified Cauer Form (MCF) and differentiation. In this method, the original discrete system is first converted into an equivalent continuous system by applying the bilinear transformation. The denominator of the equivalent continuous system and its reciprocal are differentiated successively, and the reduced denominator of the desired order is obtained by combining the differentiated polynomials. The numerator is obtained by matching the quotients of the MCF. Finally, the reduced continuous system is converted back into a discrete system using the inverse bilinear transformation. In the evolutionary technique, the Differential Evolution (DE) optimization technique is employed to reduce the higher order model. The DE method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model pertaining to a unit step input. Both methods are illustrated through a numerical example. | ['Jigyendra Sen Yadav', 'N. P. Patidar', 'Jyoti Singhai', 'Sidhartha Panda'] | Differential Evolution algorithm for model reduction of SISO discrete systems | 440,411 |
The first GSSI Summer Meeting on Algorithms was held at the Gran Sasso Science Institute in L’Aquila, Italy, on July 9th, 2016. The Gran Sasso Science Institute (GSSI) is a new international research center and PhD school. The GSSI has recently been funded with the objective of creating a new center of scientific excellence in L’Aquila, fostering the skills and highly specialized structures already present in the area, such as the Gran Sasso National Laboratories of the National Institute for Nuclear Physics (INFN) and the University of L’Aquila. | ['Michele Flammini', 'Giuseppe Persiano'] | Report on 1st GSSI Summer Meeting on Algorithms | 919,691 |
This paper presents a multi-objective optimisation technique for the design of a static synchronous series compensator (SSSC)-based controller. The design objective is to improve the transient performance of a power system subjected to a severe disturbance by minimising the power angle, terminal voltage and power flow time trajectory deviations with respect to a post-contingency equilibrium point for a power system installed with a SSSC. A genetic algorithm (GA)-based solution technique is applied to generate a Pareto set of global optimal solutions to the given multi-objective optimisation problem in power systems. The optimal gain and time constant values for the proposed SSSC-based controller are determined in a generator-infinite-bus test system. Simulation results show the effectiveness and robustness of the proposed approach in improving transient performance of the example power system. | ['Sidhartha Panda', 'Sarat Chandra Swain', 'A. K. Baliarsingh', 'A.K. Mohanty'] | SSSC-based controller design employing a multi-objective optimisation technique | 252,865 |
This study examined the importance assigned by Human Resource personnel to the personality traits of Information Technology officers. The extent to which these traits were evident among Information Technology officers was determined and compared to their level of importance among Human Resource personnel. The well-known 16 Personality Factors model was used and data was collected by questionnaire from 84 Information Technology officers working at operational levels in organizations and 64 Human Resource personnel with experience in the recruitment of Information Technology officers. For most of the traits the findings showed reasonable agreement between the level of importance of the traits according to the Human Resource personnel and the extent to which traits were evident among the Information Technology officers. However, there were differences with respect to the four traits Friendliness, Introversion, Sensitivity, and Intellect and the practical implications of these findings are discussed. | ['Puckpimon Singhapong', 'Graham Kenneth Winley'] | Personality Traits among Information Technology Officers and the Expectations of Human Resource Personnel | 901,104 |
The Earth's atmosphere heavily affects the remote sensing images collected by spaceborne passive optical sensors due to radiation-matter interaction phenomena like radiation absorption, scattering, and thermal emission. A complex phenomenon is the adjacency effect, i.e., radiation reflected by the ground that, due to the atmospheric scattering, is seen in a viewing direction different from that corresponding to the ground location that reflected it. Adjacency gives rise to crosstalk between neighboring picture elements up to a distance that depends on the width of the integral kernel function employed for the mathematical modeling of the problem. As long as the atmosphere is a linear space-invariant system, the adjacency effect can be modeled as a low-pass filter, with the atmospheric point spread function (APSF) applied to the initial image. In this paper, a direct method of estimating the discrete normalized APSF (NAPSF) using images gathered by high-resolution optical sensors is discussed. We discuss the use of the NAPSF estimate for deducing the Correction Spatial high-pass Filter (CSF), a correction filter that removes the adjacency effect. The NAPSF estimation procedure has been investigated using statistical simulations, whose outcomes permitted us to identify the conditions under which the NAPSF could be measured with acceptable errors. The NAPSF estimation is examined for various natural images acquired by MOMS-2P, CHRIS, AVIRIS, and MIVIS. | ['A. A. Semenov', 'Alexander V. Moshkov', 'Victor. N. Pozhidayev', 'Alessandro Barducci', 'Paolo Marcoionni', 'Ivan Pippi'] | Estimation of Normalized Atmospheric Point Spread Function and Restoration of Remotely Sensed Images | 347,027 |
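Since the abstract models adjacency as a linear space-invariant low-pass filter, the forward effect can be sketched as a plain 2-D convolution of the scene with a normalized APSF kernel. This is only a toy illustration of the forward model; the paper's contribution is estimating the NAPSF from imagery, not this convolution:

```python
def convolve2d(image, kernel):
    """Zero-padded 2-D convolution of a scene with a (normalized) APSF
    kernel; because the APSF is symmetric, flipping the kernel for a
    true convolution versus a correlation makes no difference here."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    cy, cx = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    yy, xx = y + i - cy, x + j - cx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += image[yy][xx] * kernel[i][j]
            out[y][x] = acc
    return out
```

A single bright pixel convolved with a normalized kernel spreads its radiance over the neighborhood, which is exactly the crosstalk the CSF is meant to undo.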
The problem of adaptive segmentation of images of objects with smooth surfaces is addressed. The images are composed of regions of slowly varying intensities that may be corrupted by additive noise. The underlying field is modeled by a Markov random field that consists of both a label process, which contains the classification of each pixel in the image, and intensity functions, which contain the possible grey levels that each pixel may take. The algorithm iteratively repeats two steps: the parameter estimation step, in which the maximum-likelihood (ML) estimates of the associated parameters are obtained; and the restoration step, in which the underlying field is estimated through the maximum-a-posteriori (MAP) method. The concept of allowing the pixel grey values to vary across the image regions is discussed. These values are estimated by using windows on the observed data. As the algorithm progresses, the window size is decreased so that the algorithm adapts to the characteristics of each region. | ['George K. Gregoriou', 'Amir Waks', 'Oleh J. Tretiak'] | Adaptive segmentation of images of objects with smooth surfaces | 340,820 |
The uniqueness (or otherwise) of test outputs ought to have a bearing on test effectiveness, yet it has not previously been studied. In this paper we introduce a novel test suite adequacy criterion based on output uniqueness. We propose 4 definitions of output uniqueness with varying degrees of strictness. We present a preliminary evaluation for web application testing that confirms that output uniqueness enhances fault-finding effectiveness. The approach outperforms random augmentation in fault finding ability by an overall average of 280% in 5 medium sized, real world web applications. | ['Nadia Alshahwan', 'Mark Harman'] | Augmenting test suites effectiveness by increasing output diversity | 481,765 |
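The paper leaves its four definitions of output uniqueness to the full text; the sketch below treats two outputs as unique only if they differ verbatim (the strictest plausible reading) and adds a greedy augmentation loop that keeps a candidate test only when it produces an unseen output. Function names are ours:

```python
def output_uniqueness(program, tests):
    """Fraction of test executions yielding a distinct output, comparing
    whole outputs verbatim (the strictest notion of uniqueness)."""
    outputs = [program(t) for t in tests]
    return len(set(outputs)) / len(outputs)

def augment_by_output_diversity(program, suite, candidates):
    """Greedy augmentation: keep a candidate only if its output is new."""
    seen = {program(t) for t in suite}
    augmented = list(suite)
    for t in candidates:
        out = program(t)
        if out not in seen:
            seen.add(out)
            augmented.append(t)
    return augmented
```

For a real web application, `program` would render a page for a given input and the "output" would be (some projection of) the HTML response.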
In-app reflection guidance for workplace learning means motivating and guiding users to reflect on their working and learning, based on users' activities captured by the app. In this paper, we present a generic concept for such in-app reflection guidance for workplace learning, its implementation in three different applications, and its evaluation in three different settings (one setting per app). From this experience, we draw the following lessons learned: First, the implemented in-app reflection guidance components are perceived as useful tools for reflective learning and their usefulness increases with higher usage rates. Second, smart technological support is sufficient to trigger reflection; however, with different implemented components, reflective learning also takes place at different stages. A sophisticated, unobtrusive integration into the working environment is not trivial at all. Automatically created prompts need sensible timing in order to be perceived as useful and must not disrupt the current working processes. | ['Angela Fessl', 'Gudrun Wesiak', 'Verónica Rivera-Pelayo', 'Sandra Feyertag', 'Viktoria Pammer'] | In-App Reflection Guidance for Workplace Learning | 591,873 |
This paper presents a novel approach to suppressing adverse effects of external noise on body-conducted soft speech for silent speech communication in noisy environments. The nonaudible murmur (NAM) microphone, one of the body-conductive microphones, is capable of detecting very soft speech. However, body-conducted soft speech easily suffers from external noise owing to its faint volume. To address this issue, the proposed method additionally uses an air-conductive microphone to detect only an external noise signal and uses the detected external noise signal to suppress its effect on the body-conducted soft speech. A semi-blind source separation technique is applied to the proposed method for estimating a linear filter to suppress the noise components without voice activity detection. Experimental results demonstrate that the proposed method yields 10 dB SNR improvements in 80 dBA noisy conditions and also yields significant improvements in sound quality of body-conducted soft speech. | ['Yusuke Tajiri', 'Tomoki Toda', 'Satoshi Nakamura'] | Noise suppression method for body-conducted soft speech enhancement based on external noise monitoring | 777,619 |
Conductance-based compartment modeling requires tuning of many parameters to fit the neuron model to target electrophysiological data. Automated parameter optimization via evolutionary algorithms (EAs) is a common approach to accomplish this task, using error functions to quantify differences between model and target. We present a three-stage EA optimization protocol for tuning ion channel conductances and kinetics in a generic neuron model with minimal manual intervention. We use the technique of Latin hypercube sampling in a new way, to choose weights for error functions automatically so that each function influences the parameter search to a similar degree. This protocol requires no specialized physiological data collection and is applicable to commonly-collected current clamp data and either single- or multi-objective optimization. We applied the protocol to two representative pyramidal neurons from layer 3 of the prefrontal cortex of rhesus monkeys, in which action potential firing rates are significantly higher in aged compared to young animals. Using an idealized dendritic topology and models with either 4 or 8 ion channels (10 or 23 free parameters respectively), we produced populations of parameter combinations fitting the target datasets in less than 80 hours of optimization each. Passive parameter differences between young and aged models were consistent with our prior results using simpler models and hand tuning. We analyzed parameter values among fits to a single neuron to facilitate refinement of the underlying model, and across fits to multiple neurons to show how our protocol will lead to predictions of parameter differences with aging in these neurons. | ['Timothy H. Rumbell', 'Danel Draguljić', 'Aniruddha Yadav', 'Patrick R. Hof', 'Jennifer I. Luebke', 'Christina M. Weaver'] | Automated evolutionary optimization of ion channel conductances and kinetics in models of young and aged rhesus monkey pyramidal neurons | 714,880 |
We introduce a novel representation of structured polynomial ideals, which we refer to as chordal networks. The sparsity structure of a polynomial system is often described by a graph that captures the interactions among the variables. Chordal networks provide a computationally convenient decomposition into simpler (triangular) polynomial sets, while preserving the underlying graphical structure. We show that many interesting families of polynomial ideals admit compact chordal network representations (of size linear in the number of variables), even though the number of components is exponentially large. Chordal networks can be computed for arbitrary polynomial systems using a refinement of the chordal elimination algorithm from [D. Cifuentes and P. A. Parrilo, SIAM J. Discrete Math., 30 (2016), pp. 1534--1570]. Furthermore, they can be effectively used to obtain several properties of the variety, such as its dimension, cardinality, and equidimensional components, as well as an efficient probabilistic test... | ['Diego Cifuentes', 'Pablo A. Parrilo'] | Chordal networks of polynomial ideals | 714,692 |
In this paper, we modify the general iterative method to approximate a common element of the set of solutions of split variational inclusion problem and the set of common fixed points of a finite family of k -strictly pseudo-contractive nonself mappings. Strong convergence theorem is established under some suitable conditions in a real Hilbert space, which also solves some variational inequality problems. Results presented in this paper may be viewed as a refinement and important generalizations of the previously known results announced by many other authors. Finally, some examples to study the rate of convergence and some illustrative numerical examples are presented. | ['Jitsupa Deepho', 'Phatiphat Thounthong', 'Poom Kumam', 'Supak Phiangsungnoen'] | A new general iterative scheme for split variational inclusion and fixed point problems of k -strict pseudo-contraction mappings with convergence analysis | 893,508 |
Despite the increased awareness that exploiting the large amount of semantic data requires statistics-based inference capabilities, little work in this direction can be found in Semantic Web research. On semantic data, supervised approaches, particularly kernel-based Support Vector Machines (SVM), are promising. However, obtaining the right features to be used in kernels is an open problem because the number of features that can be extracted from the complex structure of semantic data might be very large. Further, combining several kernels can help to deal with efficiency and data sparsity but creates the additional challenge of identifying and joining different subsets of features or kernels, respectively. In this work, we solve these two problems by employing the strategy of dynamic feature construction to compute a hypothesis, representing the relevant features for a set of examples. Then, a composite kernel is obtained from a set of clause kernels derived from components of the hypothesis. The learning of the hypothesis and kernel(s) is performed in an interleaving fashion. Based on experiments on real-world datasets, we show that the resulting relational kernel machine improves the SVM baseline. | ['Veli Bicer', 'Thanh Tran', 'Anna Gossen'] | Relational kernel machines for learning from graph-structured RDF data | 211,159 |
Multiple instance learning (MIL) is a paradigm in supervised learning that deals with the classification of collections of instances called bags. Each bag contains a number of instances from which features are extracted. The complexity of MIL is largely dependent on the number of instances in the training data set. Since we are usually confronted with a large instance space even for moderately sized real-world data sets, it is important to design efficient instance selection techniques to speed up the training process without compromising the performance. In this paper, we address the issue of instance selection in MIL. We propose MILIS, a novel MIL algorithm based on adaptive instance selection. We do this in an alternating optimization framework by intertwining the steps of instance selection and classifier learning in an iterative manner which is guaranteed to converge. Initial instance selection is achieved by a simple yet effective kernel density estimator on the negative instances. Experimental results demonstrate the utility and efficiency of the proposed approach as compared to the state of the art. | ['Zhouyu Fu', 'Antonio Robles-Kelly', 'Jun Zhou'] | MILIS: Multiple Instance Learning with Instance Selection | 96,901 |
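The initial instance selection step can be sketched with a Gaussian kernel density estimator over the negative instances: from each positive bag, keep the instance least probable under that density. The sketch below uses scalar (1-D) instances for brevity; real MILIS operates on feature vectors, and the helper names are ours, not the paper's:

```python
import math

def kde_score(x, negatives, bandwidth=1.0):
    """Gaussian kernel density estimate of x under the negative instances."""
    norm = 1.0 / (len(negatives) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - z) / bandwidth) ** 2)
                      for z in negatives)

def select_initial_prototypes(positive_bags, negatives, bandwidth=1.0):
    """From each positive bag, keep the instance least probable under the
    negative-instance density (our reading of the MILIS initialization)."""
    return [min(bag, key=lambda x: kde_score(x, negatives, bandwidth))
            for bag in positive_bags]
```

Intuitively, the instance in a positive bag that looks least like any negative instance is the best candidate for being the "true positive" witness of the bag.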
Branch-and-bound (BB) […] (2) we propose and implement several optimizations of our CUDA code by reducing branch divergence and by exploiting the properties of the GPU memory hierarchy; and (3) we evaluate our implementations and their optimizations on a modern GPU-based system and we report our experimental results. | ['Andrey Borisenko', 'Michael Haidl', 'Sergei Gorlatch'] | A GPU parallelization of branch-and-bound for multiproduct batch plants optimization | 817,161 |
It is well known that a negative feedback interconnection of passive systems is passive. However, the extension of this fundamental property to the case when there are time delays in communication remains largely unaddressed. In this paper we demonstrate that a negative feedback interconnection of output strictly passive systems, under appropriate assumptions, is passive for non-increasing time delays and may lose passivity for increasing time delays. Passivity can be retained by inserting time-varying gains in the communication path, provided a bound on the maximum rate of change of delay is known. If the dynamical systems are passive, we appeal to the results in bilateral teleoperation to recover passivity of the feedback interconnection. We show that by transforming the two systems into their scattering representation, transmitting the scattering variables as the new outputs, and using time-varying gains in the communication path, passivity of the feedback interconnection can be guaranteed independent of the time-varying delays. Finally we discuss the applicability of the proposed results for networked control of nonlinear mechanical systems. | ['Nikhil Chopra'] | Passivity results for interconnected systems with time delay | 421,197 |
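A standard form of the scattering (wave-variable) transformation the abstract refers to, for a port with force f, velocity ẋ, and wave impedance b > 0 (this is the classical teleoperation convention, not notation taken from this paper):

```latex
u(t) = \frac{f(t) + b\,\dot{x}(t)}{\sqrt{2b}}, \qquad
v(t) = \frac{f(t) - b\,\dot{x}(t)}{\sqrt{2b}},
\quad\text{so that}\quad
\tfrac{1}{2}\bigl(u^2(t) - v^2(t)\bigr) = f(t)\,\dot{x}(t).
```

The wave variables carry exactly the port power, which is why transmitting u and v instead of f and ẋ keeps the interconnection passive under constant delays.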
A 2-server Private Information Retrieval (PIR) scheme allows a user to retrieve the i-th bit of an n-bit database replicated among two noncommunicating servers, while not revealing any information about i to either server. In this work, we construct a 2-server PIR scheme with total communication cost n^(O(√(log log n / log n))). This improves over current 2-server protocols, which all require Ω(n^(1/3)) communication. Our construction circumvents the n^(1/3) barrier of Razborov and Yekhanin [2007], which holds for the restricted model of bilinear group-based schemes (covering all previous 2-server schemes). The improvement comes from reducing the number of servers in existing protocols, based on Matching Vector Codes, from 3 or 4 servers to 2. This is achieved by viewing these protocols in an algebraic way (using polynomial interpolation) and extending them using partial derivatives. | ['Zeev Dvir', 'Sivakanth Gopi'] | 2-Server PIR with Subpolynomial Communication | 975,049 |
The emerging Phase Change Memory (PCM) is considered as a promising candidate to replace DRAM as the next generation main memory since it has better scalability and lower leakage power. However, the high write power consumption has become a main challenge in adopting PCM as main memory. In addition to the fact that writing to PCM cells requires high write current and voltage, current loss in the charge pumps (CPs) also contributes a large percentage of the high power consumption. The pumping efficiency of a PCM chip is a concave function of the write current. Based on the characteristics of the concave function, the overall pumping efficiency can be improved if the write current is uniform. In this paper, we propose the peak-to-average (PTA) write scheme, which smooths the write current fluctuation by regrouping write units. An off-line optimal Integer Programming (IP) formulation and an efficient online algorithm are proposed to achieve this goal. Experimental results show that PTA can improve the charge pump efficiency to ∼40% with little overhead. Meanwhile, PTA can achieve 17.0% energy reduction on average. | ['Huizhang Luo', 'Jingtong Hu', 'Liang Shi', 'Chun Jason Xue', 'Qingfeng Zhuge'] | Peak-to-average pumping efficiency improvement for charge pump in Phase Change Memories | 684,775 |
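The regrouping idea in the abstract above can be sketched with a standard longest-processing-time greedy, used here as a stand-in for the paper's Integer Programming formulation and online algorithm (neither of which is reproduced): assign write units to time slots so that per-slot current totals stay close to the average, keeping the charge pump near its efficient operating point.

```python
import heapq

def regroup_writes(unit_currents, n_slots):
    """Longest-processing-time greedy: place each write unit (given by
    its current draw) into the currently least-loaded time slot, so the
    per-slot current totals stay as uniform as possible."""
    slots = [(0.0, i, []) for i in range(n_slots)]  # (total, slot id, units)
    heapq.heapify(slots)
    for c in sorted(unit_currents, reverse=True):
        total, i, units = heapq.heappop(slots)  # least-loaded slot
        units.append(c)
        heapq.heappush(slots, (total + c, i, units))
    return [units for _, _, units in sorted(slots, key=lambda s: s[1])]
```

Because pumping efficiency is a concave function of write current, a flatter per-slot profile (smaller peak-to-average gap) yields higher overall efficiency, which is the property the greedy targets.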
Analysis of Respiratory Flow Signals to Identify Success of Patients on Weaning Trials. | ['Hernando González Acevedo', 'Carlos Arizmendi', 'Beatriz F. Giraldo'] | Analysis of Respiratory Flow Signals to Identify Success of Patients on Weaning Trials. | 747,457 |
Controlling an underactuated manipulator with fewer actuators than degrees of freedom is a challenging problem, specifically when it is required to track a given trajectory or to be configured at a specific position in the work space. This paper presents two controllers for the set point regulation of 2-DOF underactuated manipulators. The first one is a cascade sliding mode tracking controller, while the second one uses an input-output feedback linearization approach. The first algorithm builds on the observation that an underactuated manipulator can be treated as two subsystems. Consequently, a cascade sliding mode tracking controller has been designed. Firstly, a sliding mode surface is designed for each subsystem; these two sliding surfaces represent a first layer in the design architecture. A second layer sliding mode surface is then constructed based on the first layer sliding surfaces. The cascaded sliding mode controller is then deduced via the Lyapunov stability theorem. Robustness to bounded disturbances is then investigated. In the second stage of the paper, the input-output feedback linearization (IOFL) control is presented. The latter is then combined with the sliding mode control scheme for robustness. Simulation results on a 2-DOF whirling pendulum are presented to demonstrate the effectiveness of the proposed approach. | ['Sonia Mahjoub', 'Faical Mnif', 'Nabil Derbel'] | Set point stabilization of a 2DOF underactuated manipulator | 192,161 |
Femtocells, one of the methods adopted in 4/5G cellular systems, are used for handling increasing data traffic in heterogeneous networks (HetNets) and dense networks (DenseNets). In this case, traditional mobility and session management approaches become obsolete and more appropriate methods need to be developed. Therefore, we propose Distributed Mobility Management (DMM) and User Rate-Perceived (URP) algorithms over a 3-Tier Software Defined Network architecture. By implementing these algorithms to solve the problems of mobility management, session management and capacity in DenseNets, delay will decrease, throughput will increase, handover will be faster, signaling overhead will be reduced and bottlenecks will be avoided, compared with the existing centralized mobility management in Long Term Evolution (LTE). | ['Ibrahim Elgendi', 'Kumudu S. Munasinghe', 'Abbas Jamalipour'] | Mobility management in three-tier SDN architecture for DenseNets | 886,832 |
Jihadist groups such as ISIS are spreading online propaganda using various forms of social media such as Twitter and YouTube. One of the most common approaches to stop these groups is to suspend accounts that spread propaganda when they are discovered. This approach requires that human analysts manually read and analyze an enormous amount of information on social media. In this work we make a first attempt to automatically detect messages released by jihadist groups on Twitter. We use a machine learning approach that classifies a tweet as containing material that is supporting jihadist groups or not. Even though our results are preliminary and more tests need to be carried out, we believe the results indicate that an automated approach to aid analysts in their work with detecting radical content on social media is a promising way forward. It should be noted that an automatic approach to detect radical content should only be used as a support tool for human analysts in their work. | ['Michael Ashcroft', 'Ali Fisher', 'Lisa Kaati', 'Enghin Omer', 'Nico Prucha'] | Detecting Jihadist Messages on Twitter | 608,683
Automatic Generation of Multi Platform Web Map Mobile Applications. | ['Marta Cimitile', 'Michele Risi', 'Genoveffa Tortora'] | Automatic Generation of Multi Platform Web Map Mobile Applications. | 737,260
TiEE - The Telemedical ILOG Event Engine: Optimizing Information Supply in Telemedicine. | ['Sven Meister', 'Sven Schafer', 'Valentin Stahlmann'] | TiEE - The Telemedical ILOG Event Engine: Optimizing Information Supply in Telemedicine. | 803,857
Abstract For general objects, and for illumination from a general direction, we study the constraints on shape imposed by shading. Assuming generalized Lambertian reflectance, we argue that, for a typical image, shading determines shape essentially up to a finite ambiguity. Thus regularization is often unnecessary, and should be avoided. More conjectural arguments imply that shape is typically determined with little ambiguity. However, it is pointed out that the degree to which shape is constrained depends on the image. Some images uniquely determine the imaged surface, while, for others, shape can be uniquely determined over most of the image, but infinitely ambiguous in small regions bordering the image boundary, even though the image contains singular points. For these images, shape from shading is a partially well-constrained problem. The ambiguous regions may cause shape reconstruction to be unstable at the image boundary. Our main result is that, contrary to previous belief, the image of the occluding boundary does not strongly constrain the surface solution. Also, it is shown that characteristic strips are curves of steepest ascent on the imaged surface. Finally, a theorem characterizing the properties of generic images is presented. | ['John Oliensis'] | Shape from shading as a partially well-constrained problem | 555,038 |
Dynamic optimisation is an area of application where randomised search heuristics like evolutionary algorithms and artificial immune systems are often successful. The theoretical foundation of this important topic suffers from a lack of a generally accepted analytical framework as well as a lack of widely accepted example problems. This article tackles both problems by discussing necessary conditions for useful and practically relevant theoretical analysis as well as introducing a concrete family of dynamic example problems that draws inspiration from a well-known static example problem and exhibits a bi-stable dynamic. After the stage has been set this way, the framework is made concrete by presenting the results of thorough theoretical and statistical analysis for mutation-based evolutionary algorithms and artificial immune systems. | ['Thomas Jansen', 'Christine Zarges'] | Analysis of randomised search heuristics for dynamic optimisation | 434,624 |
Given a set of objects and a query q, a point p is called the reverse k nearest neighbor (RkNN) of q if q is one of the k closest objects of p. In this paper, we introduce the concept of influence zone, which is the area such that every point inside this area is the RkNN of q and every point outside this area is not the RkNN. The influence zone has several applications in location based services, marketing and decision support systems. It can also be used to efficiently process RkNN queries. First, we present an efficient algorithm to compute the influence zone. Then, based on the influence zone, we present efficient algorithms to process RkNN queries that significantly outperform the best known existing techniques for both snapshot and continuous RkNN queries. We also present a detailed theoretical analysis to analyze the area of the influence zone and the I/O costs of our RkNN processing algorithms. Our experiments demonstrate the accuracy of our theoretical analysis. | ['Muhammad Aamir Cheema', 'Xuemin Lin', 'Wenjie Zhang', 'Ying Zhang'] | Influence zone: Efficiently processing reverse k nearest neighbors queries | 275,311
This paper designs and realizes an association discovery system linking scientific data and literature, and elaborates the key technologies used in the system. Finally, the paper verifies the feasibility of the system by taking the field of wheat breeding as an example. By using a scientific data description method based on facet classification, the system describes scientific data entities at a higher level of granularity. By using technologies such as building the first correlated entity set, judging and identifying correlated entity pairs, and analyzing the entity association network, the system reveals, to a certain extent, the semantic relations between entities and the associations of main subjects found in scientific data and literature in selected fields. | ['Wei Sun', 'Xuefu Zhang', 'Huai Wang'] | Design and realization of the association discovery system among scientific data and literature | 416,004
Delay spread can be detrimental to communication systems. Orthogonal Frequency Division Multiplexing (OFDM)/Discrete Fourier Transform-spread-OFDM (DFT-s-OFDM) based systems (e.g., Long Term Evolution (LTE), WiFi, etc.) tackle delay spread by adding a cyclic prefix (CP) before the start of the OFDM symbol. The duration of the CP is pre-determined and is set according to channel delay spread characteristics or to fit a certain frame duration. Moreover, the CP does not carry any useful data and represents an overhead that needs to be minimized. In this paper, we propose the use of DFT-s-OFDM with a flexible, configurable guard interval (GI) of variable size. We call the resulting waveform GI-DFT-s-OFDM. The length of the GI can be adjusted to accommodate the maximum delay spread in a given scenario or for a particular user (UE). Additionally, the GI may carry a sequence that can be used for time/frequency synchronization. We also discuss system design based on GI-DFT-s-OFDM, giving possible options for the GI sequence. The proposed waveform has good Peak-to-Average Power Ratio (PAPR) with out-of-band emissions similar to OFDM/DFT-s-OFDM. | ['Utsaw Kumar', 'Christian Ibars', 'Abhijeet Bhorkar', 'Hyejung Jung'] | A Waveform for 5G: Guard Interval DFT-s-OFDM | 651,138
Linear approaches to resilient aggregation in sensor networks | ['Kevin J. Henry', 'Douglas R. Stinson'] | Linear approaches to resilient aggregation in sensor networks | 808,174
On the robustness of the biological correlation network model | ['Kathryn Dempsey', 'Hesham H. Ali'] | On the robustness of the biological correlation network model | 734,421
This paper studies two-party electoral competition in a setting where no policy is unbeatable. It is shown that if parties take turns in choosing policy platforms and observe each other's choices, for one party to change position so as to win is pointless since the other party never accepts an outcome where it is sure to lose. If there is any cost to changing platform, the prediction is that the game ends in the first period with the parties converging on whatever platform the incumbent chooses. If, however, there is a slight chance of a small mistake, the incumbent does best by choosing a local equilibrium platform. This suggests that local equilibrium policies can be the predicted outcome even if the voting process is not myopic in any way. | ['Jesper Roine'] | Downsian Competition When No Policy is Unbeatable | 267,436
An Evaluation Methodology for Concept Maps Mined from Lecture Notes: An Educational Perspective | ['Thushari Atapattu', 'Katrina E. Falkner', 'Nickolas J. G. Falkner'] | An Evaluation Methodology for Concept Maps Mined from Lecture Notes: An Educational Perspective | 680,544
Real space computing technologies, such as mobile computing, enable users to make use of computers anywhere in the world. On the other hand, virtual space computing technologies enable users to use remote computer resources from their desktop environments through intuitive operations. By combining these two kinds of computing technologies, we can construct a more flexible and general platform for computing in either space. Based on this viewpoint, we have realized a communication environment, called the 'invisible person' environment, where virtual space and real space are strongly associated. In this paper, we discuss the system architecture of this environment. The policies that we took in its design are 1) reduction to a feasible design at present, 2) wide-spread popularity to become an invisible person, and 3) emphasis on the realization of communication rather than the concrete analysis and accurate presentation of the real space. These policies are reflected in our system design, where we provide users with several kinds of browsers for the flexibility of their operations. | ['Masahiko Tsukamoto'] | Integrating real space and virtual space in the 'invisible person' communication support system | 906,685
An energy profile indicates the amount of energy consumed by different parts of a parallel or distributed simulation program. Creating energy profiles is not straightforward because high precision, low overhead energy measurement mechanisms may not be available, and it is not straightforward to determine the amount of energy consumed by different hardware components such as the CPU, memory system, or communication circuits that are operating concurrently throughout the execution of the distributed simulation. Techniques to create energy profiles of distributed simulation programs are described. A model is proposed that differentiates the energy consumed by the distributed simulation engine versus simulation application code, and energy consumed for computation versus that required for communication. A methodology and techniques are described to create energy profiles for these aspects of the distributed simulation. A study is described to illustrate this methodology by profiling a distributed simulation synchronized by the Chandy/Misra/Bryant synchronization algorithm for a queuing network simulation. Empirical data are presented to validate the energy profile that is obtained. | ['Aradhya Biswas', 'Richard M. Fujimoto'] | Profiling Energy Consumption in Distributed Simulations | 730,604
Due to the proliferation of mobile devices, the demand for video in cellular networks has increased exorbitantly. However, cellular networks have limited resources and the wireless medium is time-varying in nature. This necessitates re-designing video streaming protocols taking into account the overall quality of experience (QoE) of the end users. In this paper, we propose a metric called enhanced time-varying subjective quality (eTVSQ) to measure the QoE of video users. The eTVSQ accounts for time variation in QoE due to both rate adaptation in HTTP streaming and playback interruption caused by rebuffering events. Based on this metric, we propose a rate adaptation strategy for HTTP video streaming in the downlink of cellular networks with α-fair resource allocation. The proposed method results in significant performance gains over the traditional throughput-based rate adaptation strategy. | ['Nagabhushan Eswara', 'Sumohana S. Channappayya', 'A. Kumar', 'Kiran Kuchi'] | eTVSQ based video rate adaptation in cellular networks with α-fair resource allocation | 888,471
Guest editorial: Advances in IP-optical networking for IP quad-play traffic and services | ['Admela Jukan', 'Masum Z. Hasan'] | Guest editorial: Advances in IP-optical networking for IP quad-play traffic and services | 646,893
A novel scheme is proposed to extract characters from dated postcards. The illustrations of the postcards appear in various languages and colors embedded in different backgrounds. Due to reproduction and uneven illumination, these characters suffer severe degradation and hence extracting characters using conventional methods becomes difficult. A morphological operation is proposed to remove irrelevant backgrounds that connect to the border edges of the postcards. As a result, characters become the most obvious objects. Horizontal and vertical projections are then applied to determine the exact locations of the characters. The proposed scheme has been executed on a set of color postcard images and has proved its efficacy | ['Shwu-Huey Yen', 'Mei-Fen Chen', 'Hwei-Jen Lin', 'Chia-Jen Wang', 'Chiu-Hsiang Liu'] | The extraction of characters on dated color postcards | 505,594
This paper studies the challenging problem of identifying unusual instances of known objects in images within an "open world" setting. That is, we aim to find objects that are members of a known class, but which are not typical of that class. Thus the "unusual object" should be distinguished from both the "regular object" and the "other objects". Such unusual objects may be of interest in many applications such as surveillance or quality control. We propose to identify unusual objects by inspecting the distribution of object detection scores at multiple image regions. The key observation motivating our approach is that "regular object" images, "unusual object" images and "other objects" images exhibit different region-level scores in terms of both the score values and the spatial distributions. To model these distributions we propose to use Gaussian Processes (GP) to construct two separate generative models, one for the "regular object" and the other for the "other objects". More specifically, we design a new covariance function to simultaneously model the detection score at a single location and the score dependencies between multiple regions. We demonstrate that the proposed approach outperforms comparable methods on a new large dataset constructed for the purpose. | ['Peng Wang', 'Lingqiao Liu', 'Chunhua Shen', 'Zi Huang', 'Anton van den Hengel', 'Heng Tao Shen'] | What’s Wrong with That Object? Identifying Images of Unusual Objects by Modelling the Detection Score Distribution | 823,423 |
Rough set theory is one of the important tools of soft computing, and rough approximations are the essential elements in rough set models. However, the existing fuzzy rough set model for set-valued data, which is directly constructed based on a kind of similarity relation, fails to explicitly define fuzzy rough approximations. To solve this issue, in this paper, we propose two types of fuzzy rough approximations, and define two corresponding relative positive region reducts. Furthermore, two discernibility matrices and two discernibility functions are introduced to acquire these newly proposed reducts, and the relationships among the new reducts and the existing reducts are also provided. Theoretical analyses demonstrate that the new types of reducts have less redundancy and are more diverse (no lower number of reducts) than those obtained by means of the existing matrices, and experimental results illustrate that the new reducts found by our methods outperform those obtained by the existing method. | ['Wei Wei', 'Junbiao Cui', 'Jiye Liang', 'Junhong Wang'] | Fuzzy rough approximations for set-valued data | 717,420
Generalized Petersen graphs are an important class of commonly used interconnection networks and have been studied. The total domination number of generalized Petersen graphs P(m,2) is obtained in this paper. | ['Jianxiang Cao', 'Weiguo Lin', 'Minyong Shi'] | Total Domination Number of Generalized Petersen Graphs | 74,884
I have worked on several different language design and optimizing compiler projects, and I am often surprised by which ideas turn out to be the most successful. Oftentimes it is the simplest ideas that seem to get the most traction in the larger research or user community and therefore have the greatest impact. Ideas I might consider the most sophisticated and advanced can be challenging to communicate, leading to less influence and adoption. This effect is particularly pronounced when seeking to gain adoption among actual users, as opposed to other researchers. In this talk I will discuss examples of the tradeoffs among sophistication, simplicity, and impact in my previous research work in academia and in my current work at Google. | ['Craig Chambers'] | Expressiveness, simplicity, and users | 587,144 |
Finite Automata over Structures - (Extended Abstract). | ['Aniruddh Gandhi', 'Bakhadyr Khoussainov', 'Jiamou Liu'] | Finite Automata over Structures - (Extended Abstract). | 776,621
This paper describes the architecture and design procedure of a DSP (digital signal processor) for digital audio applications. The suggested DSP has a fixed 24-bit data structure, a 6-stage pipeline, and 125 instructions. Some of the instructions are specially designed for audio signal processing. The designed DSP has been verified by comparing the results from a CBS (cycle based simulator) with those of HDL simulation through single instruction set tests, instruction combination tests, and real audio applications. Finally, we confirm by HDL simulation that the DSP successfully carried out the ADPCM and MPEG-2 AAC decoding algorithms. | ['Changwon Ryu', 'Hyung-Bae Park', 'Jusung Park', 'K. Kim'] | A compact DSP architecture for digital audio | 476,960
Measurement-based quantum computing (MBQC) is a universal model for quantum computation. The combinatorial characterisation of determinism in this model, powered by measurements, and hence, fundamentally probabilistic, is the cornerstone of most of the breakthrough results in this field. The most general known sufficient condition for a deterministic MBQC to be driven is that the underlying graph of the computation has a particular kind of flow called Pauli flow. The necessity of the Pauli flow was an open question. We show that the Pauli flow is necessary for real MBQC, but not in general, providing counterexamples for (complex) MBQC. We explore the consequences of this result for real MBQC and its applications. Real MBQC, and more generally real quantum computing, is known to be universal for quantum computing. Real MBQC has been used for interactive proofs by McKague. The two-prover case corresponds to real MBQC on bipartite graphs. While (complex) MBQC on bipartite graphs is universal, the universality of real MBQC on bipartite graphs was an open question. We show that real bipartite MBQC is not universal, proving that all measurements of real bipartite MBQC can be parallelised, leading to constant depth computations. As a consequence, McKague's techniques cannot lead to two-prover interactive proofs. | ['Simon Perdrix', 'Luc Sanselme'] | Determinism and Computational Power of Real Measurement-based Quantum Computation | 904,317
Image retargeting aims to avoid visual distortion while retaining important image content in resizing. However, current image retargeting methods usually fail in difficult situations, such as complex structures and intricate arrangements of objects. In this paper, we propose a novel image retargeting approach, which provides low visual distortion retargeting results by combining region warping and occlusion. We first represent image region with occlusion probability to avoid inaccuracy in image decomposition. Then we combine region warping and occlusion in a unified framework, and transform it to a pixel-level retargeting problem. To verify the performance of our approach, we implement it with two operators, seam carving and pixel warping. Experiments demonstrate the effectiveness of the proposed approach. | ['Zhongyan Qiu', 'Tongwei Ren', 'Yan Liu', 'Jia Bei', 'Muyang Song'] | Image Retargeting by Combining Region Warping and Occlusion | 582,593 |
Electronic data interchange (EDI) is inevitable in enabling successful collaborations between different business partners. Exchanging information electronically requires standardized formats for information exchange. There exist a variety of standards including bottom-up standards as well as top-down standards. However, business partners may utilize different standards resulting in a loss of interoperability. To cope with the variety of standardized formats we propose the approach of mapping these formats to the common concept of core components introduced by the United Nations Center for Trade Facilitation and Electronic Business (UN/CEFACT). We evaluate the applicability of different strategies for mapping an arbitrary XML Schema based standard to core components by the example of ebInterface, an Austrian invoicing standard. The evaluation provides evidence that mapping of arbitrary standards to core components indeed provides substantial benefit in leveraging the interoperability between different business document standards. | ['Christian Eis', 'Philipp Liegl', 'Christian Pichler', 'Michael Strommer'] | An Evaluation of Mapping Strategies for Core Components | 138,569 |
This paper describes our implementation of and initial experiences with DipZoom (for "Deep Internet Performance Zoom"), a novel approach to provide focused, on-demand Internet measurements. Unlike existing approaches that face a difficult challenge of building a measurement platform with sufficiently diverse measurements and measuring hosts, DipZoom implements a matchmaking service instead, which uses P2P concepts to bring together experimenters in need of measurements with external measurement providers. DipZoom offers the following two main contributions. First, since it is just a facilitator for an open community of participants, it promises unprecedented availability of diverse measurements and measuring points. Second, it can be used as a veneer over existing measurement platforms, automating the planning and execution of complex measurements. | ['Zhihua Wen', 'Sipat Triukose', 'Michael Rabinovich'] | Facilitating focused internet measurements | 196,070
Spectral unmixing is an important technology in hyperspectral image applications. Recently, sparse regression has been widely used in hyperspectral unmixing. This paper proposes a double reweighted sparse regression method for hyperspectral unmixing. The proposed method enhances the sparsity of the abundance fraction in both the spectral and spatial domains through double weights, in which one is used to enhance the sparsity of endmembers in the spectral library, and the other to improve the sparseness of the abundance fraction of every material. Experimental results on both synthetic and real hyperspectral data sets demonstrate the effectiveness of the proposed method both visually and quantitatively. | ['Rui Wang', 'Heng-Chao Li', 'Wenzhi Liao', 'Aleksandra Pizurica'] | Double reweighted sparse regression for hyperspectral unmixing | 714,163
Nursing informatics (NI) can help provide effective and safe healthcare. This study aimed to describe current research trends in NI. In the summer of 2015, the IMIA–NI Students Working Group created and distributed an online international survey of current NI trends. A total of 402 responses were submitted from 44 countries. We identified the top five NI research areas: standardized terminologies, mobile health, clinical decision support, patient safety and big data research. Respondents considered NI research funding difficult to acquire. Overall, current NI research on education, clinical practice, administration and theory is still scarce, with theory being the least common. Further research is needed to explain the impact of these trends and the needs from clinical practice. | ['Laura Maria Peltonen', 'Dari Alhuwail', 'Samira Ali', 'Martha K. Badger', 'Gabrielle Jacklin Eler', 'Mattias Georgsson', 'Tasneem Islam', 'Eunjoo Jeon', 'Hyunggu Jung', 'Chiu Hsiang Kuo', 'Adrienne Lewis', 'Lisiane Pruinelli', 'Charlene Ronquillo', 'Raymond Francis Sarmiento', 'Janine Sommer', 'Jude L. Tayaben', 'Maxim Topaz'] | Current Trends in Nursing Informatics: Results of an International Survey. | 744,563
Massive quantities of digital data are being collected in every aspect of modern life. Examples include personal photos and videos, biological and medical images and recordings from sensor arrays. To transform these massive data streams into useful information, we use a sequence of "winnowing" stages. Each step reduces the size of the data by an order of magnitude, extracting the wheat from the chaff. In this talk I will describe this approach in a variety of contexts, ranging from the analysis of genetic pathways in fruit-fly embryos and C-Elegans worms to counting birds and helping elderly people living alone keep in touch with their family and caregivers. | ['Yoav Freund'] | Data winnowing | 679,641
Obtaining Shape from SEM Image Using Intensity Modification via Neural Network. | ['Yuji Iwahori', 'Kazuhiro Shibata', 'Haruki Kawanaka', 'Kenji Funahashi', 'Robert J. Woodham', 'Yoshinori Adachi'] | Obtaining Shape from SEM Image Using Intensity Modification via Neural Network. | 751,551
In this work, we propose a simple yet highly effective algorithm for tracking a target through significant scale and orientation change. We divide the target into a number of fragments, and tracking of the whole target is achieved by coordinated tracking of the individual fragments. We use the mean shift algorithm to move the individual fragments to the nearest minima, though any other method like integral histograms could also be used. In contrast to other fragment-based approaches, which fix the relative positions of fragments within the target, we permit the fragments to move freely within certain bounds. Furthermore, we use a constant velocity Kalman filter for two purposes. Firstly, the Kalman filter achieves robust tracking through the use of a motion model. Secondly, to maintain coherence amongst the fragments, we use a coupled state transition model for the Kalman filter. Using the proposed tracking algorithm, we have experimented on several videos of several hundred frames each and obtained excellent results. | ['Viswanathan Srikrishnan', 'Tadinada Nagaraj', 'Subhasis Chaudhuri'] | Fragment Based Tracking for Scale and Orientation Adaptation | 933,175
Swii2, a HTML5/WebGL Application for Cellular Automata Debris Flows Simulation | ['Roberto Parise', 'Donato D’Ambrosio', 'Giuseppe Spingola', 'Giuseppe Filippone', 'Rocco Rongo', 'Giuseppe A. Trunfio', 'William Spataro'] | Swii2, a HTML5/WebGL Application for Cellular Automata Debris Flows Simulation | 564,267
Real-time 3D gesture visualisation for the study of Sign Language. | ['Roman Miletitch', 'Raphaël de Courville', 'Morgane Rébulard', 'Claire Danet', 'Patrick Doan', 'Dominique Boutet'] | Real-time 3D gesture visualisation for the study of Sign Language. | 781,298
This paper presents a functional programming language, based on Moggi’s monadic metalanguage. In the first part of this paper, we show how the language can be regarded as a monad on a category of signatures, and that the resulting category of algebras is equivalent to the category of computationally cartesian closed categories. In the second part, we extend the language to include a nondeterministic operational semantics, and show that the lower powerdomain semantics is fully abstract for may-testing. | ['Alan Jeffrey'] | A fully abstract semantics for a nondeterministic functional language with monadic types | 240,668 |
ESD protection has been an inevitable component of integrated circuits since the invention of semiconductor devices. A huge number of concepts and protection devices have been designed and optimized for this purpose by ESD engineers. Even though shunting a discharge current during an ESD event seems a simple functionality, almost every step along the shrinking path requires new adjustment of the protection circuits, and sometimes even implementation of totally new concepts. Entering the sub-100 nm regime, protection development goes far beyond the development of a specific optimized protection element. A sophisticated protection network has to be designed which covers both the IO circuit and the core region, where low oxide thickness and low junction breakdown voltages impose hard constraints on the maximum voltage overshoot during ESD. Designs with multiple power supply domains, in particular, complicate the ESD supply protection concept considerably. To achieve good ESD robustness, it is necessary to consider ESD protection an integral part of IC development, starting from the concept phase. To support this, and to extract the data necessary for ESD optimization, an IC-level ESD simulation approach is presented which analyses the critical discharge paths across the chip. | ['Harald Gossner'] | ESD protection for the deep sub micron regime - a challenge for design methodology | 303,590
Generating K-Anonymous Logs of People Tracing Systems in Surveilled Environments. | ['Francesco Buccafurri', 'Gianluca Lax', 'Serena Nicolazzo', 'Antonino Nocera'] | Generating K-Anonymous Logs of People Tracing Systems in Surveilled Environments. | 803,261
Concurrency-related bugs may happen when multiple threads access shared data and interleave in ways that do not correspond to any sequential execution. Their absence is not guaranteed by the traditional notion of "data race" freedom. We present a new definition of data races in terms of 11 problematic interleaving scenarios, and prove that it is complete by showing that any execution not exhibiting these scenarios is serializable for a chosen set of locations. Our definition subsumes the traditional definition of a data race as well as high-level data races such as stale-value errors and inconsistent views. We also propose a language feature called atomic sets of locations, which lets programmers specify the existence of consistency properties between fields in objects, without specifying the properties themselves. We use static analysis to automatically infer those points in the code where synchronization is needed to avoid data races under our new definition. An important benefit of this approach is that, in general, far fewer annotations are required than is the case with existing approaches such as synchronized blocks or atomic sections. Our implementation successfully inferred the appropriate synchronization for a significant subset of Java's Standard Collections framework. | ['Mandana Vaziri', 'Frank Tip', 'Julian Dolby'] | Associating synchronization constraints with data in an object-oriented language | 195,553 |
A federated security scheme based on the WS-Security standard for cross-domain grids is proposed. It integrates the WS-Security standard with the grid security mechanism. A trust model is established based on the WS-Trust specification, and secure communication is established based on the WS-SecureConversation specification. The architecture is implemented in a SAML-based federated authentication and authorization cross-domain grid. Experiments and analysis show that our scheme is secure, effective and efficient. | ['Yongkai Cai', 'Shaohua Tang'] | Security Scheme for Cross-Domain Grid: Integrating WS-Trust and Grid Security Mechanism | 543,235
Pooling robust shift-invariant sparse representations of acoustic signals. | ['Po Sen Huang', 'Jianchao Yang', 'Mark Hasegawa-Johnson', 'Feng Liang', 'Thomas S. Huang'] | Pooling robust shift-invariant sparse representations of acoustic signals | 754,682
The assimilation of biophysical crop canopy variables retrieved from remotely sensed data into two crop models of differing degree of complexity is assessed in this study, in the context of the development of tools suitable for the estimation of yield losses due to drought. The more complex AQUACROP model, developed by FAO and the simpler SAFY model were employed to estimate wheat grain yield for an area in the Shaanxi Province in China through the assimilation of biophysical variables retrieved from Landsat and HJ1A and HJ1B satellites for three growing seasons (2013 to 2015). Results were validated with ground yield data. | ['Raffaele Casa', 'Paolo Cosmo Silvestro', 'Hao Yang', 'Stefano Pignatti', 'Simone Pascucci', 'G. Yang'] | Assimilation of remotely sensed canopy variables into crop models for an assessment of drought-related yield losses: A comparison of models of different complexity | 930,379 |
The medical care for patients with type 2 diabetes generally involves ingestion of oral hypoglycemic agents in order to lower their glucose level. When predicting the result of the medication using a classification approach, high prediction accuracy of the classifier is essential because of high misclassification costs. The application of a reject option to this approach supports more accurate prediction, allowing for human experts to examine when the classifier is unreliable to predict. In this paper, we propose a reject option framework based on heterogeneous ensemble learning through a two-phase fusion. The first phase is to calculate confidence scores, which are used to determine whether to predict, and the second phase is to derive final prediction results by fusing the outputs from multiple heterogeneous classifiers. We confirm the effectiveness of the proposed method to the anti-diabetic drug failure prediction problem through experiments on actual electronic medical records data of type 2 diabetes. The proposed method yields a better trade-off between accuracy and rejection than other reject options with statistical significance. A lower prediction error is obtained for the same degree of rejection. We obtained desirable accuracy for the anti-diabetic drug failure problem by applying the proposed reject option, which allows using the classification approach in practice. The accurate prediction of drug failure at the moment of prescription can assist clinical decisions for patients. In addition, in-depth analysis can be considered for those prescriptions that are predicted as failure or rejected. | ['Seokho Kang', 'Sungzoon Cho', 'Su-jin Rhee', 'Kyung-Sang Yu'] | Reliable prediction of anti-diabetic drug failure using a reject option | 928,946 |
REM (random exponential marking) is an active queue management algorithm that is designed to achieve a desirable equilibrium, without explicit consideration of its dynamic properties, especially in the presence of feedback delay. We provide sufficient conditions for REM to be locally stable around an equilibrium, in a multi-link multi-source setting, when sources have identical delays of one or two discrete time steps. | ['Qinghe Yin', 'Steven H. Low'] | On stability of REM algorithm with uniform delay | 446,056
We review key challenges of developing spoken dialog systems that can engage in interactions with one or multiple participants in relatively unconstrained environments. We outline a set of core competencies for open-world dialog, and describe three prototype systems. The systems are built on a common underlying conversational framework which integrates an array of predictive models and component technologies, including speech recognition, head and pose tracking, probabilistic models for scene analysis, multiparty engagement and turn taking, and inferences about user goals and activities. We discuss the current models and showcase their function by means of a sample recorded interaction, and we review results from an observational study of open-world, multiparty dialog in the wild. | ['Dan Bohus', 'Eric Horvitz'] | Dialog in the open world: platform and applications | 101,721
The First Cross-Script Code-Mixed Question Answering Corpus. | ['Somnath Banerjee', 'Sudip Kumar Naskar', 'Paolo Rosso', 'Sivaji Bandyopadhyay'] | The First Cross-Script Code-Mixed Question Answering Corpus. | 979,178
This paper demonstrates the prevalence of a shared characteristic between visualizations and images of nature. We have analyzed visualization competitions and user studies of visualizations and found that the more preferred, better performing visualizations exhibit more natural characteristics. Due to our brain being wired to perceive natural images [SO01], testing a visualization for properties similar to those of natural images can help show how well our brain is capable of absorbing the data. In turn, a metric that finds a visualization's similarity to a natural image may help determine the effectiveness of that visualization. We have found that the results of comparing the sizes and distribution of the objects in a visualization with those of natural standards strongly correlate to one's preference of that visualization. | ['Steve Haroz', 'Kwan-Liu Ma'] | Natural visualizations | 836,960 |
Path planning in the presence of dynamic obstacles is a challenging problem due to the added time dimension in search space. In approaches that ignore the time dimension and treat dynamic obstacles as static, frequent re-planning is unavoidable as the obstacles move, and their solutions are generally sub-optimal and can be incomplete. To achieve both optimality and completeness, it is necessary to consider the time dimension during planning. The notion of adaptive dimensionality has been successfully used in high-dimensional motion planning such as manipulation of robot arms, but has not been used in the context of path planning in dynamic environments. In this paper, we apply the idea of adaptive dimensionality to speed up path planning in dynamic environments for a robot with no assumptions on its dynamic model. Specifically, our approach considers the time dimension only in those regions of the environment where a potential collision may occur, and plans in a low-dimensional state-space elsewhere. We show that our approach is complete and is guaranteed to find a solution, if one exists, within a cost sub-optimality bound. We experimentally validate our method on the problem of 3D vehicle navigation (x, y, heading) in dynamic environments. Our results show that the presented approach achieves substantial speedups in planning time over 4D heuristic-based A*, especially when the resulting plan deviates significantly from the one suggested by the heuristic. | ['Anirudh Vemula', 'Katharina Muelling', 'Jean Oh'] | Path Planning in Dynamic Environments with Adaptive Dimensionality | 732,981 |
With each new developer to a software development team comes a greater challenge to manage the communication, coordination, and knowledge transfer amongst teammates. Fred Brooks discusses this challenge in The Mythical Man-Month by arguing that rapid team expansion can lead to a complex team organization structure. While Brooks focuses on productivity loss as the negative outcome, poor product quality is also a substantial concern. But if team expansion is unavoidable, can any quality impacts be mitigated? Our objective is to guide software engineering managers by empirically analyzing the effects of team size, expansion, and structure on product quality. We performed an empirical, longitudinal case study of a large Cisco networking product over a five year history. Over that time, the team underwent periods of no expansion, steady expansion, and accelerated expansion. Using team-level metrics, we quantified characteristics of team expansion, including team size, expansion rate, expansion acceleration, and modularity with respect to department designations. We examined statistical correlations between our monthly team-level metrics and monthly product-level metrics. Our results indicate that increased team size and linear growth are correlated with later periods of better product quality. However, periods of accelerated team expansion are correlated with later periods of reduced software quality. Furthermore, our linear regression prediction model based on team metrics was able to predict the product's post-release failure rate within a 95% prediction interval for 38 out of 40 months. Our analysis provides insight for project managers into how the expansion of development teams can impact product quality. | ['Andrew Meneely', 'Pete Rotella', 'Laurie Williams'] | Does adding manpower also affect quality?: an empirical, longitudinal analysis | 288,148 |
Part I has provided theoretical insights on the concept of local fading memory and analyzed a purely mathematical memristor model that, under dc and ac periodic stimuli, experiences memory loss in each of the basins of attraction of two locally stable state-space attractors. This brief designs the first ever real memristor with bistable stationary dc and ac behavior. A rigorous theoretical analysis unveils the key mechanisms behind the emergence of nonunique asymptotic dynamics in this novel electronic circuit, falling into the class of extended memristors. | ['Alon Ascoli', 'Ronald Tetzlaff', 'Leon O. Chua'] | The First Ever Real Bistable Memristors—Part II: Design and Analysis of a Local Fading Memory System | 899,219 |
Since multi-label data is ubiquitous in reality, multi-label learning is a promising topic in data mining. Faced with multi-label data, traditional single-label learning methods are not competent for the classification tasks. This paper proposes a new lazy learning algorithm for multi-label classification. The characteristic of our method is that it takes both binary relevance and shelly neighbors into account. Unlike the k nearest neighbors, the shelly neighbors form a shell to surround a given instance. As a result, our method not only identifies more helpful neighbors for classification, but also avoids the difficulty of choosing an optimal value for k in lazy learning methods. The experiments carried out on five benchmark datasets demonstrate that the proposed approach outperforms standard lazy multi-label classification in most cases. | ['Huawen Liu', 'Shichao Zhang', 'Jianmin Zhao', 'Jianbin Wu', 'Zhonglong Zheng'] | A New Multi-label Learning Algorithm Using Shelly Neighbors | 573,833
The authors develop a theory characterizing optimal stopping times for discrete-time ergodic Markov processes with discounted rewards. The theory differs from prior work by its view of per-stage and terminal reward functions as elements of a certain Hilbert space. In addition to a streamlined analysis establishing existence and uniqueness of a solution to Bellman's equation, this approach provides an elegant framework for the study of approximate solutions. In particular, the authors propose a stochastic approximation algorithm that tunes weights of a linear combination of basis functions in order to approximate a value function. They prove that this algorithm converges (almost surely) and that the limit of convergence has some desirable properties. The utility of the approximation method is illustrated via a computational case study involving the pricing of a path dependent financial derivative security that gives rise to an optimal stopping problem with a 100-dimensional state space. | ['John N. Tsitsiklis', 'B. Van Roy'] | Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives | 383,896 |
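The discounted optimal-stopping problem described in the record above reduces, in the simplest finite-state case, to solving Bellman's equation V = max(g, r + βPV). The toy value-iteration sketch below illustrates that equation only; it is not the authors' Hilbert-space framework or their stochastic approximation algorithm, and the two-state chain, rewards, and discount factor are made-up assumptions.

```python
# Toy illustration of Bellman's equation for discounted optimal stopping
# on a small Markov chain. NOT the paper's method; all numbers below are
# made-up assumptions for illustration.

def optimal_stopping(P, g, r, beta, tol=1e-10):
    """Value iteration for V = max(g, r + beta * P V).
    P: transition matrix (list of rows), g: stopping rewards,
    r: per-stage rewards, beta: discount factor in (0, 1)."""
    n = len(g)
    V = [0.0] * n
    while True:
        V_new = [max(g[i], r[i] + beta * sum(P[i][j] * V[j] for j in range(n)))
                 for i in range(n)]
        if max(abs(V_new[i] - V[i]) for i in range(n)) < tol:
            return V_new
        V = V_new

# Two-state chain: stopping in state 1 pays 20, continuing pays 1 per stage.
P = [[0.9, 0.1], [0.5, 0.5]]
g = [0.0, 20.0]
r = [1.0, 1.0]
V = optimal_stopping(P, g, r, beta=0.9)
```

With these numbers, stopping in state 1 is optimal, so V[1] equals the stopping reward 20, while V[0] is the discounted continuation value.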
We prove that every YES instance of Balanced ST-Connectivity has a balanced path of polynomial length. | ['Shiva Kintali', 'Asaf Shapira'] | A Note on the Balanced ST-Connectivity | 607,332 |
Formal methods is concerned with analyzing systems formally. Here, we focus on three different systems: software systems, dynamical control systems, and biological systems. The analysis questions can be broadly classified into verification and synthesis questions. We focus on both these aspects here. Logic and logical methods play a key role in the tools and techniques across this whole range of systems and analyses. | ['Ashish Tiwari'] | Logic in Software, Dynamical and Biological Systems | 236,109
We propose an algorithm for the approximation of stable and unstable fibers that applies to autonomous as well as to nonautonomous ODEs. The algorithm is based on computing the zero-contour of a specific operator; an idea that was introduced in [Huls, 2006] for discrete time systems. We present precise error estimates for the resulting contour algorithm and demonstrate its efficiency by computing stable and unstable fibers for a (non)autonomous pendulum equation in two space dimensions. Our second example is the famous three-dimensional Lorenz system for which several approximations of the two-dimensional Lorenz manifold are calculated. In both examples, we observe equally good performance for autonomously and nonautonomously chosen parameters. | ['Thorsten Hüls'] | On the Approximation of Stable and Unstable Fiber Bundles of (Non)Autonomous ODEs — A Contour Algorithm | 826,379
A new minimal path active contour model for boundary extraction is presented. Implementing the new approach requires four steps: (1) users place some initial end points on or near the desired boundary through an interactive interface; (2) a potential searching window is defined between two end points; (3) a graph search method based on conic curves is used to search the boundary; and (4) a "wriggling" procedure is used to calibrate the contour and reduce the sensitivity of the search results to the selected initial end points. The last three steps are performed automatically. In the proposed approach, the potential window systematically provides a new node connection for the later graph search, which is different from the row-by-row and column-by-column methods used in the classical graph search. Furthermore, this graph search also suggests ways to design a "wriggling" procedure to evolve the contour in the direction nearly perpendicular to itself by creating a list of displacement vectors in the potential window. The proposed minimal path active contour model speeds up the search and reduces the "metrication error" frequently encountered in classical graph search methods, e.g., the dynamic programming minimal path (DPMP) method. | ['Chao Han', 'Thomas S. Hatsukami', 'Jenq Neng Hwang', 'Chun Yuan'] | A fast minimal path active contour model | 507,363
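The record above contrasts its conic-curve graph search with the classical dynamic programming minimal path (DPMP) method. The sketch below is a generic DPMP-style search over a small cost grid, not the authors' algorithm; the grid values and the 3-neighbour left-to-right connectivity are made-up assumptions.

```python
# Toy dynamic-programming minimal path over a small cost grid, in the
# spirit of the classical DPMP method referenced above. Grid values and
# connectivity (move one column right, to one of 3 adjacent rows) are
# made-up assumptions.

def dp_minimal_path(cost):
    """Min-cost left-to-right path; from (r, c) you may move to
    (r-1, c+1), (r, c+1) or (r+1, c+1)."""
    rows, cols = len(cost), len(cost[0])
    acc = [row[:] for row in cost]  # accumulated cost table
    for c in range(1, cols):
        for r in range(rows):
            prev = [acc[rr][c - 1]
                    for rr in range(max(0, r - 1), min(rows, r + 2))]
            acc[r][c] += min(prev)
    # backtrack from the cheapest end point in the last column
    r = min(range(rows), key=lambda rr: acc[rr][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = min(range(max(0, r - 1), min(rows, r + 2)),
                key=lambda rr: acc[rr][c - 1])
        path.append(r)
    path.reverse()
    return acc, path

cost = [[9, 9, 9],
        [1, 9, 1],
        [9, 1, 9]]
acc, path = dp_minimal_path(cost)
```

On this grid, the cheapest path zig-zags through the three 1-cost cells, with total accumulated cost 3.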
Emerging scientific fields are commonly identified by different citation based bibliometric parameters. However, their main shortcoming is the existence of a time lag needed for a publication to receive citations. In the present study, we assessed the relationship between the age of references in scientific publications and the change in publication rate within a research field. Two indices based on the age of references are presented, the relative age of references and the ratio of references published during the preceding 2 years, and applied thereafter on four datasets from the previously published studies, which assessed eutrophication research, sturgeon research, fisheries research, and the general field of ecology. We observed a consistent pattern that the emerging research topics had a lower median age of references and a higher ratio of references published in the preceding 2 years than their respective general research fields. The main advantage of indices based on the age of references is that they are not influenced by a time lag, and as such they are able to provide insight into current scientific trends. The best potential of the presented indices is to use them combined with other approaches, as each one can reveal different aspects and properties of the assessed data, and provide validation of the obtained results. Their use should be however assessed further before they are employed as standard tools by scientists, science managers and policy makers. | ['Ivan Jarić', 'Jelena Knežević-Jarić', 'Mirjana Lenhardt'] | Relative age of references as a tool to identify emerging research fields with an application to the field of ecology and environmental sciences | 483,857 |
Reliable decoding of a user's intention is a key step to control prosthetic devices. Force myography (FMG) is often used to assess topographic force patterns resulting from volumetric changes of activated muscles. However, during limb position changes this approach may give deteriorating performance over time. To address this limitation, we developed a position-aware platform that integrates an inertial measurement unit (IMU) and a force sensing array (FSA) with an advanced signal processing module. The module analyzes data using an artificial neural network (ANN) to predict an intended hand movement. Our results demonstrate that by utilizing multi-sensory information this decoding strategy provides a 90% accuracy. | ['Mahdi Rasouli', 'Karthik Chellamuthu', 'John-John Cabibihan', 'Sunil L. Kukreja'] | Towards enhanced control of upper prosthetic limbs: A force-myographic approach | 858,422 |
Understanding the change in retail structure has been a distinct challenge for many managers and policy analysts since the 1950s. Research has focused on concepts such as the wheel of retailing. However, this theory is more descriptive than explanatory of changes in market structure. In this paper we argue that changes in retail structure (discount stores, specialist stores, department stores and even malls versus online shopping) can be modelled using the ecological simulation concept of competing sessile species, with different growth rates and overgrowth rates based on changing suitability to the environment. Our results show that the application of the COMPETE model [see 1, 2] produces a greater and qualitatively different diversity of retailers in larger compared to smaller shopping malls. | ['Roderick Ducan', 'Terry Bossomaier', "Steven D'Alessandro", 'Craig R. Johnson', 'Kathyrn French'] | Using the Simulation of Ecological Systems to Explain the Wheel of Retailing | 601,460
Abstract: In this study, Doppler signals recorded from the internal carotid artery (ICA) of 97 subjects were processed by personal computer using classical and model-based methods. Fast Fourier transform (classical method) and autoregressive (model-based method) methods were selected for processing the ICA Doppler signals. The parameters in the autoregressive method were found by using maximum likelihood estimation. The Doppler power spectra of the ICA Doppler signals were obtained by using these spectral analysis techniques. The variations in the shape of the Doppler spectra as a function of time were presented in the form of sonograms in order to obtain medical information. These Doppler spectra and sonograms were then used to compare the applied methods in terms of their frequency resolution and the effects in determination of stenosis and occlusion in the ICA. Reliable information on haemodynamic alterations in the ICA can be obtained by evaluation of these sonograms. | ['Elif Derya Übeyli'] | Feature extraction by autoregressive spectral analysis using maximum likelihood estimation: internal carotid arterial Doppler signals | 424,956 |
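The record above fits autoregressive model parameters by maximum likelihood estimation. As a simpler stand-in, the sketch below estimates AR(2) coefficients for a synthetic signal with the Yule-Walker equations (a deliberate substitution, not the paper's method); the true coefficients and noise level are made-up assumptions.

```python
# Sketch of AR(2) coefficient estimation for a synthetic signal using
# the Yule-Walker equations -- a simpler stand-in for the maximum
# likelihood estimation used in the paper. The AR coefficients and
# noise level below are made-up assumptions.
import random

def autocorr(x, lag):
    """Biased sample autocovariance at the given lag."""
    n = len(x)
    m = sum(x) / n
    return sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n

def yule_walker_ar2(x):
    """Solve [[r0, r1], [r1, r0]] a = [r1, r2] for the AR(2) coefficients."""
    r0, r1, r2 = autocorr(x, 0), autocorr(x, 1), autocorr(x, 2)
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

random.seed(0)
true_a1, true_a2 = 0.6, -0.3          # stable AR(2) process (assumed)
x = [0.0, 0.0]
for _ in range(20000):
    x.append(true_a1 * x[-1] + true_a2 * x[-2] + random.gauss(0.0, 1.0))
a1, a2 = yule_walker_ar2(x[100:])     # drop the initial transient
```

With 20,000 samples, the estimated coefficients land close to the assumed (0.6, -0.3).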
Etat de l'art : Extraction de connaissances à partir de thesaurus pour générer une ontologie. | ['Fabien Amarger', 'Catherine Roussey', 'Jean-Pierre Chanet', 'Ollivier Haemmerlé', 'Nathalie Hernandez'] | Etat de l'art : Extraction de connaissances à partir de thesaurus pour générer une ontologie. | 746,234
Multi-agent systems (MAS) are one of the complex applications of distributed artificial intelligence. They are prone to different kinds of exceptions due to their characteristic of operating in a complex and dynamic environment. The dynamism and unpredictable nature of an open environment give rise to unpredictable exceptions. It becomes essential to have some exception diagnosis mechanisms in place to be able to diagnose the cause of such exceptions and to execute proper recovery plans. These mechanisms do come with some overheads. In this paper, we present an empirical evaluation of our proposed sentinel-based approach to exception diagnosis in an open MAS and also discuss the trade-offs in using a sentinel-based approach to exception diagnosis in an MAS. | ['Nazaraf Shah', 'Kuo-Ming Chao', 'Nick Godwin', 'Anne E. James', 'C-F Tasi'] | An empirical evaluation of a sentinel based approach to exception diagnosis in multi-agent systems | 69,030
A fully automatic mesh segmentation scheme using heterogeneous graphs is presented. We introduce a spectral framework where local geometry affinities are coupled with surface patch affinities. A heterogeneous graph is constructed combining two distinct graphs: a weighted graph based on adjacency of patches of an initial over-segmentation, and the weighted dual mesh graph. The partitioning relies on processing each eigenvector of the heterogeneous graph Laplacian individually, taking into account the nodal set and nodal domain theory. Experiments on standard datasets show that the proposed unsupervised approach outperforms the state-of-the-art unsupervised methodologies and is comparable to the best supervised approaches. | ['Panagiotis Theologou', 'Ioannis Pratikakis', 'Theoharis Theoharis'] | Unsupervised Spectral Mesh Segmentation Driven by Heterogeneous Graphs | 703,280 |
A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given single-letter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary. Moreover, the algorithm is universal also in a semi-stochastic setting, in which the input is an individual sequence, and the randomness is due solely to the channel noise. The proposed denoising algorithm is practical, requiring a linear number of register-level operations and sublinear working storage size relative to the input data length. | ['Tsachy Weissman', 'Erik Ordentlich', 'Gadiel Seroussi', 'Sergio Verdu', 'Marcelo J. Weinberger'] | Universal discrete denoising: known channel | 555,453 |
Fisher linear discriminant analysis (FLDA) is a classical and important algorithm for face recognition. However, FLDA fails when there is only one sample per object, because the intra-class scatter matrices cannot be calculated. In this paper, an adaptive virtual sample generation method based on singular value decomposition (SVD) is proposed to solve the one-sample problem in face recognition. Using SVD, an approximation image is reconstructed; the single training image and its approximation image are then combined to obtain a virtual image. Every class thus has two samples, the single sample and its virtual sample, and FLDA can be used to extract features. The major contribution of the proposed method is that it adaptively constructs the virtual sample image based on the energy distribution of each image. Experimental results show that the proposed method is efficient and achieves higher recognition accuracy than SVD-based and other existing algorithms. | ['Jin Liu', 'A. Pengren', 'Hong Bing Ji'] | An adaptive virtual sample generation method for one sample problem in face recognition | 967,092
History: Women who read the stars. | ['Sue Nelson'] | History: Women who read the stars | 946,086
This paper presents an outline of an Ontological and Semantic understanding-based model (SEMONTOQA) for an open-domain factoid Question Answering (QA) system. The outlined model analyses unstructured English natural language texts to a vast extent and represents the inherent contents in an ontological manner. The model locates and extracts useful information from the text for various question types and builds a semantically rich knowledge-base that is capable of answering different categories of factoid questions. The system model converts the unstructured texts into a minimalistic, labelled, directed graph that we call a Syntactic Sentence Graph (SSG). An Automatic Text Interpreter using a set of pre-learnt Text Interpretation Subgraphs and patterns tries to understand the contents of the SSG in a semantic way. The system proposes a new feature and action based Cognitive Entity-Relationship Network designed to extend the text understanding process to an in-depth level. Application of supervised learning allows the system to gradually grow its capability to understand the text in a more fruitful manner. The system incorporates an effective Text Inference Engine which takes the responsibility of inferring the text contents and isolating entities, their features, actions, objects, associated contexts and other properties, required for answering questions. A similar understanding-based question processing module interprets the user's need in a semantic way. An Ontological Mapping Module, with the help of a set of pre-defined strategies designed for different classes of questions, is able to perform a mapping between a question's ontology and the set of ontologies stored in the background knowledge-base. Empirical verification is performed to show the usability of the proposed model. The results achieved show that this model can be used effectively as a semantic understanding based alternative QA system. | ['Mohammad Moinul Hoque', 'Paulo Quaresma'] | SEMONTOQA: A Semantic Understanding-Based Ontological Framework for Factoid Question Answering | 648,507
We investigate fundamental decisions in the design of instruction set architectures for linear genetic programs that are used as both model systems in evolutionary biology and underlying solution representations in evolutionary computation. We subjected digital organisms with each tested architecture to seven different computational environments designed to present a range of evolutionary challenges. Our goal was to engineer a general purpose architecture that would be effective under a broad range of evolutionary conditions. We evaluated six different types of architectural features for the virtual CPUs: (1) genetic flexibility: we allowed digital organisms to more precisely modify the function of genetic instructions, (2) memory: we provided an increased number of registers in the virtual CPUs, (3) decoupled sensors and actuators: we separated input and output operations to enable greater control over data flow. We also tested a variety of methods to regulate expression: (4) explicit labels that allow programs to dynamically refer to specific genome positions, (5) position-relative search instructions, and (6) multiple new flow control instructions, including conditionals and jumps. Each of these features also adds complication to the instruction set and risks slowing evolution due to epistatic interactions. Two features (multiple argument specification and separated I/O) demonstrated substantial improvements in the majority of test environments, along with versions of each of the remaining architecture modifications that show significant improvements in multiple environments. However, some tested modifications were detrimental, though most exhibit no systematic effects on evolutionary potential, highlighting the robustness of digital evolution. 
Combined, these observations enhance our understanding of how instruction architecture impacts evolutionary potential, enabling the creation of architectures that support more rapid evolution of complex solutions to a broad range of challenges. | ['David M. Bryson', 'Charles Ofria'] | Understanding Evolutionary Potential in Virtual CPU Instruction Set Architectures | 492,346 |
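As a toy illustration of the kind of virtual CPU such experiments vary, here is a minimal linear-GP interpreter; the three-instruction set, the tuple encoding, and the `n_registers` parameter (which loosely mirrors the "memory" feature above) are invented for this sketch and do not reproduce the architectures actually studied in the paper.

```python
def run_program(genome, n_registers=2, max_steps=100):
    """Toy linear-GP virtual CPU. Each instruction is a (op, dst, src) tuple
    operating on a small bank of integer registers."""
    regs = [0] * n_registers
    ip = 0          # instruction pointer
    steps = 0       # hard cap so malformed genomes cannot loop forever
    while ip < len(genome) and steps < max_steps:
        op, dst, src = genome[ip]
        if op == "inc":
            regs[dst] += 1
        elif op == "add":
            regs[dst] += regs[src]
        elif op == "nand":
            # bitwise NAND on 8-bit values, a classic digital-evolution primitive
            regs[dst] = ~(regs[dst] & regs[src]) & 0xFF
        ip += 1
        steps += 1
    return regs

prog = [("inc", 0, 0), ("inc", 0, 0), ("add", 1, 0)]
print(run_program(prog))  # [2, 2]
```

Architectural questions like those in the paper then become questions about which opcodes, register counts, and flow-control primitives to expose to mutation.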
A method for building flexible shape models is presented in which a shape is represented by a set of labelled points. The technique determines the statistics of the points over a collection of example shapes. The mean positions of the points give an average shape, and a number of modes of variation are determined, describing the main ways in which the example shapes tend to deform from the average. In this way, allowed variation in shape can be included in the model. The method produces a compact, flexible 'Point Distribution Model' with a small number of linearly independent parameters, which can be used during image search. We demonstrate the application of the Point Distribution Model in describing two classes of shapes. | ['Timothy F. Cootes', 'Christopher J. Taylor', 'David H. Cooper', 'Jim Graham'] | Training models of shape from sets of examples | 401,048 |
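The training step described here amounts to principal component analysis on aligned shape vectors; the sketch below is a simplified reading of that procedure (alignment and mode-count selection are omitted), with function names invented for illustration.

```python
import numpy as np

def point_distribution_model(shapes, n_modes=2):
    """shapes: (N, 2k) array of aligned shapes, each row a flattened
    (x1, y1, ..., xk, yk) point set. Returns the mean shape and the
    n_modes eigenvectors with the largest variance."""
    mean = shapes.mean(axis=0)
    centred = shapes - mean
    cov = centred.T @ centred / (len(shapes) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    modes = eigvecs[:, order[:n_modes]]
    return mean, modes

def synthesize(mean, modes, b):
    """Generate a new shape from mode weights b: x = mean + P @ b."""
    return mean + modes @ np.asarray(b)

shapes = np.array([[0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 2.0, 0.0],
                   [0.0, 0.0, 3.0, 0.0]])     # three aligned two-point shapes
mean, modes = point_distribution_model(shapes, n_modes=1)
new_shape = synthesize(mean, modes, [0.5])    # deform along the first mode
```

During image search, the mode weights `b` (clipped to plausible ranges) are the "small number of linearly independent parameters" the abstract refers to.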
We introduce a joint decoding method for compressive sensing that can simultaneously exploit sparsity of individual components of a composite signal. Our method can significantly reduce the total number of variables decoded jointly by separating variables of large magnitudes in one domain and using only these variables to represent the domain. Furthermore, we enhance the separation accuracy by using joint decoding across multiple domains iteratively. This separation-based approach improves the decoding time and quality of the recovered signal. We demonstrate these benefits analytically and by presenting empirical results. | ['Hsieh-Chung Chen', 'H. T. Kung'] | Separation-Based Joint Decoding in Compressive Sensing | 298,549 |
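The authors' decoder is not reproduced here; as a loose analogue only, the sketch below separates two components that are sparse in different domains (time and Fourier) by alternating hard thresholding. It conveys the separation idea, keeping only the large-magnitude variables in each domain per iteration, but omits the compressive measurement step entirely.

```python
import numpy as np

def separate(y, n_iter=10, k_fourier=2, k_spike=1):
    """Toy separation by alternating hard thresholding: pull out the
    large-magnitude components of y in the Fourier domain (tone part)
    and in the time domain (spike part). Illustrative only."""
    spikes = np.zeros_like(y)
    smooth = np.zeros_like(y)
    for _ in range(n_iter):
        # keep the k largest Fourier coefficients of the spike-free residual
        F = np.fft.fft(y - spikes)
        F[np.argsort(np.abs(F))[:-k_fourier]] = 0.0
        smooth = np.real(np.fft.ifft(F))
        # keep the k largest time-domain samples of the tone-free residual
        r = y - smooth
        spikes = np.zeros_like(y)
        top = np.argsort(np.abs(r))[-k_spike:]
        spikes[top] = r[top]
    return spikes, smooth

n = 64
t = np.arange(n)
tone = np.cos(2 * np.pi * 4 * t / n)   # sparse in the Fourier domain
y = tone.copy()
y[10] += 5.0                            # a spike, sparse in the time domain
spikes, smooth = separate(y)
```

Because each domain only has to represent its own large-magnitude variables, the number of unknowns handled at each step shrinks, which is the intuition behind the decoding-time benefit claimed above.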
We examine the capacity region of the K-user Gaussian fading broadcast channel with channel state known at the receivers but unknown at the transmitter. For binary expansion superposition signaling, we derive a new achievable rate based on soft decision decoding of the binary inputs. The approach is based on a simple, tight bound on the output entropy of a high-SNR AWGN channel with a continuous uniform input. We show that a binary superposition signaling scheme is, for each user, within a constant gap of 5.443 b/s/Hz of the broadcast channel capacity for all fading state distributions. | ['Roy D. Yates', 'Jing Lei'] | Gaussian fading broadcast channels with CSI only at the receivers: An improved constant gap | 383,385 |
Direct volume rendering techniques map volumetric attributes (e.g., density, gradient magnitude, etc.) to visual styles. Commonly this mapping is specified by a transfer function. The specification of transfer functions is a complex task and requires expert knowledge about the underlying rendering technique. In the case of multiple volumetric attributes and multiple visual styles the specification of the multi-dimensional transfer function becomes more challenging and non-intuitive. We present a novel methodology for the specification of a mapping from several volumetric attributes to multiple illustrative visual styles. We introduce semantic layers that allow a domain expert to specify the mapping in the natural language of the domain. A semantic layer defines the mapping of volumetric attributes to one visual style. Volumetric attributes and visual styles are represented as fuzzy sets. The mapping is specified by rules that are evaluated with fuzzy logic arithmetics. The user specifies the fuzzy sets and the rules without special knowledge about the underlying rendering technique. Semantic layers allow for a linguistic specification of the mapping from attributes to visual styles replacing the traditional transfer function specification. | ['Peter Rautek', 'Stefan Bruckner', 'M. Eduard Gröller'] | Semantic Layers for Illustrative Volume Rendering | 221,281 |
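A semantic layer of this kind can be sketched with ordinary fuzzy-logic arithmetic: membership functions for attribute terms, min for fuzzy AND, and a weighted blend of styles for defuzzification. The rule set, membership breakpoints, and colours below are invented for illustration and are not the paper's actual semantics.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def semantic_layer(density, gradient):
    """One illustrative rule set: 'if density is high and gradient is low,
    draw in bone style (white); if density is low, draw in tissue style (red)'."""
    high_density = triangular(density, 0.5, 1.0, 1.5)
    low_density  = triangular(density, -0.5, 0.0, 0.5)
    low_gradient = triangular(gradient, -0.5, 0.0, 0.5)
    bone   = min(high_density, low_gradient)   # fuzzy AND of the antecedents
    tissue = low_density
    total = bone + tissue or 1.0               # avoid division by zero
    # defuzzify: blend the style colours by rule activation
    white, red = (1.0, 1.0, 1.0), (1.0, 0.0, 0.0)
    return tuple((bone * w + tissue * r) / total for w, r in zip(white, red))

print(semantic_layer(0.9, 0.1))  # -> (1.0, 1.0, 1.0)
```

The point of the design is that a domain expert edits only the linguistic pieces, the membership functions and rules, while the rendering backend evaluates them per sample.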
Early order commitment (EOC) is a strategy for supply chain coordination, wherein the retailer commits to purchasing from a manufacturer a fixed order quantity a few periods in advance of the regular delivery lead time. In this paper, we formulate and analyze the EOC strategy for a decentralized, two-level supply chain consisting of a single manufacturer and multiple retailers, who face external demands that follow an autocorrelated AR(1) process over time. We characterize the special structure of the optimal solutions for the retailers' EOC periods to minimize the total supply chain cost and discuss the impact of demand parameters and cost parameters. We then develop and compare three approaches to finding the optimal solution. Using this optimal cost as the benchmark, we investigate the effectiveness of a wholesale price-discount scheme for the manufacturer to coordinate this decentralized system. We give numerical examples to show the benefits of EOC to the whole supply chain, examine the efficiency of the discount scheme in general situations, and provide the conditions under which full coordination is achieved. | ['Jinxing Xie', 'Deming Zhou', 'Jerry C. Wei', 'Xiande Zhao'] | A note on Price discount based on early order commitment in a single manufacturer-multiple retailer supply chain | 505,601 |
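The demand side of this model is an AR(1) process; a minimal simulation, assuming the standard form d_t = mu + rho * (d_{t-1} - mu) + eps_t with Gaussian noise (the paper's exact parameterization is not given in the abstract):

```python
import random

def ar1_demand(mu, rho, sigma, n, d0=None, seed=0):
    """Simulate AR(1) demand: d_t = mu + rho * (d_{t-1} - mu) + eps_t,
    with eps_t ~ N(0, sigma^2). rho is the autocorrelation coefficient."""
    rng = random.Random(seed)
    d = mu if d0 is None else d0
    series = []
    for _ in range(n):
        d = mu + rho * (d - mu) + rng.gauss(0.0, sigma)
        series.append(d)
    return series

# Deterministic check: with sigma = 0, demand decays geometrically toward mu
print(ar1_demand(100.0, 0.5, 0.0, 3, d0=120.0))  # [110.0, 105.0, 102.5]
```

Positive rho is what makes early commitment informative: today's demand carries information about demand in the committed-to periods, which drives the structure of the optimal EOC period.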
In this paper, we present a new multicast architecture and the associated multicast routing protocol for providing efficient and flexible multicast services over the Internet. Traditional multicast architectures construct and update the multicast tree in a distributed manner, which causes two problems: first, since each node has only local or partial information on the network topology and group membership, it is difficult to build an efficient multicast tree; second, due to the lack of complete information, broadcast is often used when transmitting control packets or data packets, which consumes a great deal of network bandwidth. In the newly proposed multicast architecture, a few powerful routers, called m-routers, collect multicast-related information and process multicast requests based on the information collected. m-routers handle most multicast-related tasks, while other routers only need to perform minimal routing functions. m-routers are designed to handle simultaneous many-to-many communications efficiently. The new multicast routing protocol, called Service Centric Multicast Protocol (SCMP), builds a dynamic shared multicast tree rooted at the m-router for each group. The multicast tree can satisfy the QoS constraint on maximum end-to-end delay while minimizing tree cost. The tree construction is performed by a special type of self-routing packets to minimize protocol overhead. Our simulation results on NS-2 demonstrate that the new SCMP protocol outperforms other existing protocols and is a promising alternative for providing efficient and flexible multicast services over the Internet. | ['Yuanyuan Yang', 'Jianchao Wang', 'Min Yang'] | A Service-Centric Multicast Architecture and Routing Protocol | 233,761 |
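One hedged reading of the tree construction: since every member joins a shared tree rooted at the m-router subject to a maximum end-to-end delay, a minimum-delay Dijkstra tree is a natural baseline. The sketch below is that baseline only; it is not SCMP's self-routing-packet mechanism and it ignores the cost-minimization side of the problem.

```python
import heapq

def min_delay_tree(graph, root, members, delay_bound):
    """Dijkstra over link delays from the m-router (root). Each member joins
    via its minimum-delay path; returns the tree's edge set, or None if some
    member cannot be reached within delay_bound.
    graph: {node: [(neighbor, link_delay), ...]}"""
    dist = {root: 0.0}
    parent = {root: None}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, delay in graph.get(u, []):
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    edges = set()
    for m in members:
        if dist.get(m, float("inf")) > delay_bound:
            return None                  # the delay bound is infeasible
        node = m
        while parent[node] is not None:  # walk the path back to the root
            edges.add((parent[node], node))
            node = parent[node]
    return edges

net = {"r": [("a", 1.0), ("b", 4.0)], "a": [("b", 1.0)]}
print(min_delay_tree(net, "r", ["b"], delay_bound=3.0))
```

A cost-aware variant would trade off link cost against delay along each member's path, which is the harder constrained-tree problem SCMP targets.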